Linked color imaging (LCI), a novel image-enhanced endoscopy technology, emphasizes the color of early gastric cancer
Background and study aims Linked color imaging (LCI) and blue laser imaging (BLI) are novel image-enhanced endoscopy technologies with strong, unique color enhancement. We investigated the efficacy of LCI and BLI-bright compared to conventional white light imaging (WLI) by measuring the color difference between early gastric cancer lesions and the surrounding mucosa. Patients and methods Images of early gastric cancer scheduled for endoscopic submucosal dissection were captured by LCI, BLI-bright, and WLI under the same conditions. Color values of the lesion and surrounding mucosa were defined as the average of the color value in each region of interest. Color differences between the lesion and surrounding mucosa (ΔE) were examined in each mode. The color value was assessed using the CIE L*a*b* color space (CIE: Commission Internationale d'Eclairage). Results We collected images of 43 lesions from 42 patients. Average ΔE values with LCI, BLI-bright, and WLI were 11.02, 5.04, and 5.99, respectively. The ΔE was significantly higher with LCI than with WLI (P < 0.001). Limited to cases of small ΔE with WLI, the ΔE was approximately 3 times higher with LCI than with WLI (7.18 vs. 2.25). The ΔE with LCI was larger when the surrounding mucosa had severe intestinal metaplasia (P = 0.04). The average color value of a lesion and the surrounding mucosa differed. This value did not have a sufficient cut-off point between the lesion and surrounding mucosa to distinguish them, even with LCI. Conclusion LCI had a larger ΔE than WLI. It may allow easy recognition and early detection of gastric cancer, even for inexperienced endoscopists.
Several reports have demonstrated the efficacy of equipment-based image-enhanced endoscopy (IEE) for diagnosing and detecting early gastrointestinal cancer [8-10]. Its superiority results from the enhancement of lesion color and the ability to demarcate the lesion from the surrounding mucosa. The efficacy of magnified endoscopy with IEE for the diagnosis of early gastric cancer has been widely reported [11-14]. There have been reports of the advantages of IEE without magnification, but these were pilot studies with small sample groups [15,16]. The efficacy of IEE without magnification for the detection of early gastric cancer therefore remains controversial. This lack of evidence is possibly because of the subtle color difference between gastric cancer lesions and the surrounding mucosa, even with IEE.
Linked color imaging (LCI) and blue laser imaging (BLI) are novel IEE technologies developed by Fujifilm Corporation (Tokyo, Japan). These endoscopic technologies use narrowband short-wavelength light. Blue and green color information and red color information are corrected separately. BLI uses blue and green color information to produce red color-enhanced images, as in narrowband imaging (NBI). LCI uses the information of all three colors. Unlike conventional white light imaging (WLI), the captured image is output with color enhancement in its own color range (e.g., red is changed to vivid red and white to clear white) by unique image processing [17-21].
We speculated that this novel IEE system could produce a larger color difference between an early gastric cancer lesion and the surrounding mucosa. This difference may allow the early recognition of a lesion, even for less experienced physicians. Moreover, we speculated that the unique color of cancer could be determined by using LCI.
In this study, we investigated the visibility of early gastric cancer in each mode by evaluating the color difference between the cancer lesion and surrounding mucosa. We attempted to determine a cancer's unique color with LCI by measuring the color value of the lesion and surrounding mucosa.
Patients and methods
This study was conducted as a retrospective image analysis study. Patients who were examined using the LASEREO system (FujiFilm Corporation) before they underwent endoscopic submucosal dissection (ESD) at Okayama University Hospital (Okayama City, Japan) and Tsuyama Chuo Hospital (Tsuyama, Japan) from October 2014 to January 2016 were included in this study.
The ethical review boards of Okayama University Hospital and Tsuyama Chuo Hospital approved this retrospective chart review and analysis of the procedural data used in this study.
Instruments
The LASEREO system (Fujifilm, Tokyo, Japan) with an upper gastrointestinal endoscope (EG-L590ZW; Fujifilm) was used in this study. This endoscopy system uses 410-nm and 450-nm narrowband lasers instead of the conventional xenon lamp, and produces three modes of IEE in addition to conventional WLI: BLI, BLI-bright, and LCI. The 450-nm wavelength light excites a phosphor in the tip of the scope to produce a wide-wavelength light. The 410-nm wavelength light is a narrowband blue light that is strongly absorbed by red objects; therefore, this light is used to enhance red-colored objects. Blue and green information and red information are corrected separately by a charge-coupled device sensor in the tip of the scope. A BLI image consists of the blue and green information from the narrowband blue light illumination and produces red-enhanced images. The BLI-bright mode has sufficient light quantity with narrowband blue light to brighten a wide organ, such as the stomach, while producing a BLI-like image that enhances blood vessels on the surface of the mucosa [22]. An LCI image is acquired with the same illumination as BLI-bright; however, further image processing is carried out so that it has an appearance similar to WLI and its colors are displayed more vividly (e.g., red is changed to vivid red and white to clear white). The IEE modes can be changed instantly using a button on the endoscope handle. In this study, the WLI, BLI-bright, and LCI modes were used.
Image acquisition
We captured images of the early gastric cancer lesions under the same conditions with WLI, BLI-bright, and LCI. The endoscopic images of each lesion were captured carefully from a mid-range distance in the WLI, BLI-bright, and LCI modes such that a similar distance and overall lightness were achieved in all images taken in each mode. The border between the cancer and the normal mucosa was also captured with magnification in BLI. The images were taken by three expert endoscopists (Okayama University Hospital: HK and YK; Tsuyama Chuo Hospital: RT).
Image selection
One image from each mode (i.e., WLI, BLI-bright, and LCI) was selected per lesion (▶ Fig. 1). Cases that met any of the following conditions were excluded: an image of a lesion not taken at mid-range distance in all three IEE modes; an image for which mucosal color analysis was difficult because an unusual color occupied the dominant area of the region of interest (ROI), such as attached blood or pus, halation, or shadow; a lesion considered to be primarily an adenoma; a lesion larger than 30 mm; and a lesion in a remnant stomach.
Image processing and color analysis
Color processing and analysis were performed with Adobe Photoshop CS4 (Adobe Systems Inc., San Jose, California, United States). The algorithm used to locate the ROI in the selected image is shown in ▶ Fig. 2. First, the border line between the lesion and surrounding mucosa was drawn on the image with reference to the histological examination of the ESD-resected tissue and the many other endoscopic images captured during the procedure, including magnified views. Second, an outside line was drawn parallel to the border line; the area enclosed by this line was twice as large as the area enclosed by the border line. Third, an inside line was drawn parallel to the border line; the width between the border line and the inside line was the same as the width between the outside line and the border line. A rectangle was then drawn in the center of the lesion along the same illuminance axis, with a width 1/4 to 1/3 of the width of the lesion. The axis of the rectangle was also set to avoid unusual color areas, such as shadow, in the ROI of the surrounding mucosa. The ROIs of the lesion and the surrounding mucosa were the areas enclosed by these lines. All WLI, BLI-bright, and LCI images underwent this image processing (▶ Fig. 3).
Small areas of unusual color due to attachment of mucus, bleeding, or halation were partially excluded from the ROI. The color values of the lesion (L*l, a*l, b*l) and surrounding mucosa (L*s, a*s, b*s) were defined as the average of the color values in each ROI using the CIE L*a*b* color space (CIE: Commission Internationale d'Eclairage) developed by the International Commission on Illumination in 1976 [23] (▶ Fig. 4). The color value was expressed with the three-dimensional color parameters L* (black to white; range, 0 to +100), a* (green to red; range, −128 to +127), and b* (blue to yellow; range, −128 to +127). A positive value represents a shift toward white on axis L*, red on axis a*, and yellow on axis b*; together these axes represent all colors visible to the human eye [24,25]. In other words, the CIE L*a*b* color model approximates human color perception. The relative perceptual difference between any two colors can be approximated by the color distance between them, as expressed by the CIE L*a*b* color values (i.e., L*, a*, and b*). The color difference between the lesion and surrounding mucosa (ΔE) is expressed by the following equation: ΔE = √[(L*l − L*s)² + (a*l − a*s)² + (b*l − b*s)²]. The ΔE is graded according to the evaluation criteria of the National Bureau of Standards (NBS) units of color difference (▶ Table 1). The ΔE was converted to NBS units using the following formula [26]: NBS units = ΔE × 0.92.
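As an illustrative re-implementation (not part of the original analysis, which was performed in Photoshop), the ΔE and its NBS conversion can be computed directly from the averaged L*a*b* values of the two ROIs; the numerical values below are hypothetical:

```python
import math

def delta_e(lab_lesion, lab_surrounding):
    """CIE76 color difference between two (L*, a*, b*) triplets."""
    dl = lab_lesion[0] - lab_surrounding[0]
    da = lab_lesion[1] - lab_surrounding[1]
    db = lab_lesion[2] - lab_surrounding[2]
    return math.sqrt(dl ** 2 + da ** 2 + db ** 2)

def nbs_units(delta_e_value):
    """Convert a color difference to NBS units (NBS = deltaE x 0.92)."""
    return delta_e_value * 0.92

# Hypothetical averaged ROI values (L*, a*, b*) for one lesion and its surrounding mucosa.
lesion = (55.0, 38.2, 20.1)
surrounding = (62.0, 30.5, 14.0)

d = delta_e(lesion, surrounding)
print(f"deltaE = {d:.2f}, NBS units = {nbs_units(d):.2f}")
```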
The ΔE in each mode was calculated and compared to that of WLI. The relationship between the ΔE and background factors of the patients and lesions was investigated in each mode.
The average color value of the lesion and surrounding mucosa was measured and compared in each mode to investigate the cancer's unique color.
Evaluation of background factors
The evaluation of endoscopic atrophy was based on the Kimura-Takemoto classification [27] and regarded as mild for C-I and C-II, moderate for C-III and O-I, and severe for O-II and O-III.
ESD was performed for all lesions that were used in this study. Resected specimens were step-sectioned lengthwise at 2-mm intervals [28] and then stained with hematoxylin and eosin for the histological examination. We evaluated the morphologic type, lesion size, histological type, histological assessment of inflammation, atrophy and intestinal metaplasia of the surrounding mucosa, based on the updated Sydney System [29]. A score of 1 or below was classified as mild, and a score of 2 or above was classified as severe. An expert pathologist performed all histological assessments.
To assess patients' Helicobacter pylori (H. pylori) status, at least 2 of the following examinations were performed in all patients: histological evaluation, bacterial culture, urea breath test, and serological H. pylori antibody test. We defined patients with positive H. pylori test results as having present H. pylori status. As H. pylori infection leads to intestinal metaplasia [30], patients with negative H. pylori test results and intestinal metaplasia in the surrounding mucosa were regarded as being in the post-infection phase. Patients with negative H. pylori test result and no intestinal metaplasia were defined as not infected.
Statistics
All statistical analyses were performed with statistical software (JMP PRO, version 12; SAS Institute Inc., Cary, North Carolina, United States). The relationship between the color difference and the background factors in each mode was compared using Student's t test. The color difference between WLI and the other modes, and the color value between the lesion and the surrounding mucosa, were compared using the Wilcoxon signed-rank test. The cut-off point of the color value between the lesion and the surrounding mucosa was determined by receiver operating characteristic (ROC) curve analysis. A two-sided P < 0.05 was considered statistically significant.
▶ Fig. 1 Imaging characteristics of the linked color imaging (LCI) and blue laser imaging (BLI)-bright modes.
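For reference, the cut-off analysis described above corresponds to a standard ROC procedure. A minimal sketch with scikit-learn is given below; the arrays are hypothetical, and the use of the Youden index to pick the cut-off is our assumption, since the criterion used in the study is not stated:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical a* values of individual ROIs and their labels (1 = lesion, 0 = surrounding mucosa).
a_star = np.array([41.2, 33.0, 38.7, 29.5, 44.1, 31.8, 39.9, 35.2])
is_lesion = np.array([1, 0, 1, 0, 1, 0, 1, 0])

fpr, tpr, thresholds = roc_curve(is_lesion, a_star)
auc = roc_auc_score(is_lesion, a_star)

# Youden index (sensitivity + specificity - 1) as one common way to pick a cut-off.
youden = tpr - fpr
best_cutoff = thresholds[np.argmax(youden)]
print(f"AUC = {auc:.2f}, candidate a* cut-off > {best_cutoff:.1f}")
```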
Results
From October 2014 to January 2016, 101 lesions from 98 patients were examined using the LASEREO system before ESD at Okayama University Hospital and Tsuyama Chuo Hospital by three expert endoscopists. Almost half of the cases were excluded, mainly because of the presence of shadow, halation, or adhesion of blood or pus in the ROI. Images of 43 lesions from 42 patients met our inclusion criteria and underwent color analysis (▶ Fig. 5).
The patient and lesion characteristics in this study are shown in ▶ Table 2. The average ΔE values with WLI, BLI-bright, and LCI were 5.99, 5.04, and 11.02, respectively (▶ Table 3). The ΔE value with LCI was significantly higher and approximately twice that of WLI. Assessment of the color difference based on NBS units indicated that it was "appreciable" with WLI and BLI-bright, but one grade higher ("much") with LCI.
▶ Fig. 2 Image processing method used in the current study. a The image of the lesion is prepared, and the area of interest is extracted. b The border line between the lesion and surrounding mucosa is drawn based on the histological examination of the tissue resected by endoscopic submucosal dissection and on other endoscopic images captured during the procedure, including magnified views. c The inside and outside lines are drawn equidistant from the border line. d A rectangle intersecting the lesion along the minor axis of the stomach is drawn. The regions of interest (ROI) of the lesion and the surrounding mucosa are enclosed within these lines. The color value of the lesion and surrounding mucosa is represented by the average value within each ROI. The color difference between the lesion and surrounding mucosa (ΔE) is the difference between the averaged color values in the ROI of the lesion and the ROI of the surrounding mucosa.
We then selected the lesions for which the ΔE value with WLI was less than 3. There were only seven such lesions; however, their average ΔE was approximately three times higher with LCI than with WLI (7.18 vs. 2.25) (▶ Table 4). Assessment of the color difference based on NBS units indicated that it was only "noticeable" with WLI and BLI-bright, but two grades higher ("much") with LCI.
The association between the ΔE and background factors in each mode is presented in ▶ Table 5. The presence of H. pylori infection with BLI-bright and severe intestinal metaplasia of the surrounding mucosa with LCI resulted in a significantly higher ΔE.
The color value of a lesion was significantly higher than that of the surrounding mucosa on axes a* and b* in WLI, axis a* in BLI-bright, and axes a* and b* in LCI. However, the color values of the lesion and the surrounding mucosa largely overlapped (▶ Fig. 6). LCI showed the most significant difference between the lesion and the surrounding mucosa.
▶ Fig. 3 Process of color analysis in each mode. WLI, white light imaging; BLI, blue laser imaging; LCI, linked color imaging.
▶ Fig. 4 The CIE L*a*b* color space (CIE: Commission Internationale d'Eclairage). The CIE L*a*b* color space is a color-opponent space with three dimensions: L* (lightness), a* (red to green), and b* (yellow to blue). The color difference between the lesion and surrounding mucosa (ΔE) is calculated in the L*a*b* space as the distance between two points (black double arrow). It approximates the visual differences detected by the human eye.
▶ Table 1 The evaluation criteria of color difference, based on National Bureau of Standards (NBS) units.
To detect a unique color value of cancer, we analyzed the cut-off point between the lesion and the surrounding mucosa in LCI using ROC analysis. The most suitable cut-off points for axis a* and axis b* were > 36.5 and > 18.7, respectively. The sensitivity, specificity, and area under the curve were, respectively, 60.4%, 76.8%, and 0.71 for axis a*, and 81.4%, 60.5%, and 0.71 for axis b*. The sensitivity and specificity for lesions that met both cut-off points were 51.2% and 86.0%, respectively.
Discussion
Morphological changes and color differences between a cancer lesion and the surrounding mucosa are the two primary cues for detecting early gastric cancer. A characteristic of IEE is the enhancement and change of color. In our study, LCI showed a significantly larger color difference between the lesion and surrounding mucosa compared with WLI; the color difference with LCI was approximately twice that with WLI. According to the NBS criteria, the average calculated color difference yielded by WLI was "appreciable or prominent", whereas that yielded by LCI was "much or excessively marked". For lesions whose average calculated color difference with WLI was only "noticeable" or below, LCI yielded an approximately three-fold amplification. We were able to distinguish lesions from the normal mucosa more easily with the additional color contrast provided by LCI. Therefore, LCI might be more useful than WLI for the detection of early gastric cancer.
In our study, no remarkable differences between BLI-bright and WLI were noted. Like NBI, the BLI-bright mode uses narrowband short-wavelength light and produces red color-enhanced images. Many studies have reported the superiority of NBI for cancer detection in the esophagus [8,9]. We believe this technology is effective in environments with less color variation, such as the esophagus. The color difference between the lesion and the surrounding mucosa in WLI consists of two dimensions (a* and b*); therefore, enhancing only a* (i.e., red) was insufficient to demarcate cancers in the stomach.
LCI enabled better color discrimination when there was marked intestinal metaplasia in the surrounding mucosa. There is one report of using LCI to detect intestinal metaplasia [19]. Mucosa with intestinal metaplasia appears white in WLI but lavender in LCI. White is located at the center of the a* and b* axes, whereas lavender has a negative b*-coordinate. Gastric cancer is primarily red, i.e., it has a positive b*-coordinate. Therefore, for lesions with marked intestinal metaplasia in the surrounding mucosa, the color difference with LCI was greater than that with WLI.
▶ Table 5 footnotes: 1 Kimura-Takemoto classification: C-1 to C-2, mild; C-3 to O-1, moderate; O-2 to O-3, severe. 2 Helicobacter pylori. 3 Paris classification: 0-IIa, slightly elevated; 0-IIb, flat; 0-IIc, slightly depressed. 4 Histological classification: tub1, well differentiated adenocarcinoma; tub2, moderately differentiated adenocarcinoma; por, poorly differentiated adenocarcinoma.
▶ Fig. 6 Color value of the lesion and surrounding mucosa. Columns L*, a*, and b* indicate the color value of each dimension. The subscripts "l" and "s" indicate "lesion" and "surrounding mucosa," respectively. WLI, white light imaging; BLI, blue laser imaging; LCI, linked color imaging.
The BLI-bright mode showed a significantly greater color difference in patients with current H. pylori infection than in those post-infection. In post-infection cases, the background color occasionally changes to red at the site of atrophic mucosa [31]. In our study, there was atrophy in the surrounding mucosa in most cases, and the color of the lesion was potentially red. The BLI-bright mode only enhances red color information. Thus, in post-infection cases, the colors of the surrounding mucosa and the lesion were similar in BLI-bright mode. The smaller ΔE of post-infection cases in BLI-bright mode explains the significant difference. Patients with chronic gastritis have a varicolored mucosa. By using LCI, areas with subtle color differences from the surrounding mucosa may be highlighted, so the detection of false-positive lesions (i.e., noncancerous color-enhanced areas) may increase. In such a situation, magnified endoscopy with BLI or NBI can differentiate cancerous from noncancerous lesions [11-14]. For the early detection of gastric cancer, recognizing a prospective lesion is important. If a questionable lesion can be detected, the endoscopist can determine whether it is cancerous or noncancerous by using magnified endoscopy with IEE. Therefore, our results may lead to the early recognition of cancerous lesions.
The color value of the lesion was significantly different from that of the surrounding mucosa in each mode. However, the color values of the lesion and surrounding mucosa greatly overlapped. The cut-off color value between the lesion and surrounding mucosa had a low sensitivity and specificity that was insufficient for clinical use, even when using LCI, which showed the largest difference between the lesion and surrounding mucosa. Based on our data, it was difficult to determine the unique color characteristics of gastric cancer. Therefore, the key factor to detecting a cancerous lesion is the color difference between the lesion and surrounding mucosa, not the unique color.
Recently, Suzuki et al. reported the efficacy of LCI for improving the visibility of flat colorectal lesions compared with WLI and BLI [32]. That study found that LCI, unlike BLI, improved visibility for non-expert endoscopists. The higher color difference between the lesion and the surrounding mucosa might explain this improvement in visibility even for non-expert endoscopists, and the basic, objective data of the present study seem to support this finding.
Our study has several limitations. First, the images could not be stored concurrently in each mode; therefore, the filming conditions were not precisely identical. However, several images were captured for each case and mode, and the most similar images were selected for this study. Second, it was difficult to verify the accuracy of the border line between the lesion and normal mucosa. The resected specimen was step-sectioned at 2-mm intervals and evaluated by an expert pathologist, who diagnosed the area of cancer and drew a line encircling it on the photograph of the resected specimen. Moreover, magnified endoscopic images of the border line were stored in all procedures, and the border line was drawn based on these images. We believe this strategy is sufficient to analyze the border line between the lesion and the surrounding mucosa. Further, we defined the ROI within a certain range and took the average color value within the area, so the margin of error should be acceptable. Third, many lesions were excluded at the point of image selection because the photographic conditions differed between modes or the images were insufficient for analysis. This may have introduced a selection bias toward lesions that were easily photographed from the front. Fourth, only differentiated-type cancers were included in this study. As we only included patients who were scheduled for endoscopic resection, there was a lack of data regarding undifferentiated tumors, which might also represent a selection bias. Finally, the color of early gastric cancer was not always homogeneous. All lesions were biopsied before the procedure, so external factors may have produced some color change. We excluded areas of erosion and vivid redness caused by the biopsy inside a lesion, and we analyzed color not as a point but as an area; we believe the influence of color heterogeneity was thereby reduced.
Conclusion
In conclusion, this study indicates that the color of early gastric cancer lesions differs from the surrounding normal mucosa, but it is not unique. It is therefore difficult to diagnose early gastric cancer by measuring only the color value. LCI produces the most vivid contrast between cancer lesions and the surrounding mucosa. Thus, LCI may facilitate the early and easy recognition of gastric cancer, even by inexperienced endoscopists. To prove the efficacy of LCI for early gastric cancer detection, randomized, controlled clinical trials are needed.
Adaptive Interaction Modeling via Graph Operations Search
Interaction modeling is important for video action analysis. Recently, several works have designed specific structures to model interactions in videos. However, these structures are manually designed and non-adaptive, which requires structure design effort and, more importantly, cannot model interactions adaptively. In this paper, we automate the process of structure design to learn adaptive structures for interaction modeling. We propose to search the network structures with a differentiable architecture search mechanism, which learns to construct adaptive structures for different videos to facilitate adaptive interaction modeling. To this end, we first design the search space with several basic graph operations that explicitly capture different relations in videos. We experimentally demonstrate that our architecture search framework learns to construct adaptive interaction modeling structures, which provides more understanding of the relations between the structures and some interaction characteristics, and also removes the need for manual structure design. Additionally, we show that the designed basic graph operations in the search space are able to model different interactions in videos. Experiments on two interaction datasets show that our method achieves performance competitive with the state of the art.
Introduction
Video classification is one of the basic research topics in computer vision. Existing video classification solutions can be divided mainly into two groups. The first is the two-stream network based methods [32,37,10], which model appearance and motion features with RGB and optical flow streams, respectively; the second is the 3D convolutional neural network (CNN) based methods [33,5,36,30,35,27], which model spatiotemporal features with stacked 3D convolutions or their decomposed variants. While these methods work well on scene-based action classification, most of them obtain unsatisfactory performance on recognizing interactions, since they do not effectively or explicitly model relations.
To model the interactions in videos, some methods employ specific structures [44,17,19] to capture temporal relations. Others model the relations between entities. Nonlocal network [38] and GloRe [8] design networks with self-attention and graph convolution to reason about the relations between semantic entities. CPNet [26] aggregates features from potential correspondences for representation learning. Space-time region graphs [39] are developed to model the interactions between detected objects with graph convolution network (GCN).
However, existing methods have to manually design network structures for interaction modeling, which requires considerable architecture engineering effort. More importantly, the designed structures are fixed, so they cannot adaptively model different interactions. For example, the two videos in Figure 1 contain interactions with greatly different complexities and properties: the upper one mainly concerns the motion of the background, while the lower one involves complicated relations among objects. Which kind of structure should be used to adequately model the interactions is not completely known in advance, so adaptive structures need to be constructed for more effective interaction modeling.
Instead of designing fixed network structures manually, we propose to automatically search adaptive network structures directly from training data, which not only reduces structures design efforts but also enables adaptive interaction modeling for different videos. As briefly illustrated in Figure 1, different operations are adaptively selected to construct the network structures for adaptive interaction modeling for different videos, which is implemented by differentiable architecture search. To construct the architecture search space, we first design several basic graph operations which explicitly capture different relations in videos, such as the temporal changes of objects and relations with the background. Our experiments show that the architecture search framework automatically constructs adaptive network structures for different videos according to some interaction characteristics, and the designed graph operations in the search space explicitly model different relations in videos. Our method obtains competitive performance with state-of-the-arts in two interaction recognition datasets.
In summary, the contribution of this paper is two-fold. (1) We propose to automatically search adaptive network structures for different videos for interaction modeling, which enables adaptive interaction modeling for different videos and reduces structures design efforts. (2) We design the search space with several basic graph operations, which explicitly model different relations in videos.
Action and Interaction Recognition
In the deep learning era, action recognition obtains impressive improvements with 2D [32,37,10] or 3D [18,33,30,5,36,35,27] CNNs. 2D CNNs use RGB frames and optical flows as separate streams to learn appearance and motion representations respectively, while 3D CNNs learn spatiotemporal features with 3D convolutions or the decomposed counterparts. Some other works [22,19] learn spatiotemporal representations by shifting feature channels or encoding motion features together with spatiotemporal fea-tures, which achieve high performance and efficiency. As for temporal-based actions, TRN [44] and Timeception [17] design specific structures to model the temporal relations.
To model interactions, Gupta et al. [13] apply spatial and functional constraints with several integrated tasks to recognize interactions. InteractNet [11] and Dual Attention Network [41] are proposed to model the interactions between human and objects. Some other works model the relations between entities for interaction recognition. Nonlocal network [38] models the relations between features with self-attention. CPNet [26] aggregates correspondences for representation learning. GCNs are employed to model the interactions between nodes [39,8]. These specific structures in the above methods are non-adaptive. In practice, however, we do not know what kinds of interactions are contained in videos, and the non-adaptive structures could not sufficiently model various interactions, which requires adaptive structures for effective modeling.
In this work, we propose to automatically search adaptive network structures with differentiable architecture search mechanism for interaction recognition.
Graph-based Reasoning
Graph-based methods are widely used for relation reasoning in many computer vision tasks. For example, in image segmentation, CRFs and random walk networks are used to model the relations between pixels [6,4,21]. GCNs [14,20] are proposed to collectively aggregate information from graph structures and applied in many tasks including neural machine translation, relation extraction and image classification [2,3,29,40]. Recently, GCNs are used to model the relations between objects or regions for interaction recognition. For example, Chen et al. [8] adopt GCN to build a reasoning module to model the relations between semantic nodes, and Wang et al. [39] employ GCN to capture the relations between detected objects.
In this paper, we design the search space with basic operations based on graph. We propose several new graph operations that explicitly model different relations in videos.
Network Architecture Search
Network architecture search aims to discover optimal architectures automatically. Automatically searched architectures obtain competitive performance in many tasks [46,24,47]. Due to the computational demands of optimization in the discrete domain [47,31], Liu et al. [25] propose DARTS, which relaxes the search space to be continuous and optimizes the architecture by gradient descent.
Inspired by DARTS, we employ a differentiable architecture search mechanism to automatically search adaptive structures directly from training data, which facilitates adaptive interaction modeling for different videos and removes the need for manual structure design.
Figure 2. Overall framework. Some frames are sampled from a video as the input to our model. We extract basic features of the sampled frames with a backbone CNN, and extract class-agnostic bounding box proposals with an RPN model. Then we apply RoIAlign to obtain the features of the proposals and regard them as node features. In the graph operations search stage, we search for a computation cell, where the supernodes are transformed by the selected graph operations on the superedges (see Sections 3.2 and 3.3 for details), to construct adaptive structures. The searched structures are used to model the interactions in the corresponding videos. Finally, the node features are pooled into a video representation for interaction recognition.
Proposed Method
In order to learn an adaptive interaction modeling structure for each video, we elaborate on the graph operations search method in this section. We design the architecture search space with several basic graph operations, enriching the candidate operations beyond graph convolution with several proposed new graph operations that model different relations, e.g. the temporal changes of objects and relations with the background. We further develop the search framework based on differentiable architecture search to search an adaptive structure for each video, which enables adaptive interaction modeling for different videos.
Overall Framework
We first present our overall framework for interaction recognition in Figure 2. Given a video, we sample some frames as the input to our model. We extract basic features of the sampled frames with a backbone CNN. At the same time, we extract class-agnostic RoIs for each frame with Region Proposal Network (RPN) [15]. Then we apply RoIAlign [15] to obtain features for each RoI. All the RoIs construct the graph for relation modeling. The nodes are exactly the RoIs, and edges are defined depending on the specific graph operations introduced in Section 3.2, in which different graph operations would indicate different connections and result in different edge weights. To obtain adaptive network structures, we employ differentiable architecture search mechanism to search adaptive structures in which graph operations are combined hierarchically. The interactions are modeled with the searched structures by transforming the node features with the selected graph operations. Finally, the output node features are pooled into a video representation for interaction classification.
In the following subsections, we describe the search space with basic graph operations and the architecture search framework in details.
Search Space with Graph Operations
To search the network structures, we firstly need to construct a search space. We search for a computation cell to construct the network structures, as illustrated in Figure 2. A computation cell is a directed acyclic computation graph with N ordered supernodes ("supernode" is renamed from "node" to avoid confusion with the nodes in the graphs constructed from RoIs). Each supernode contains all the nodes and each superedge indicates the candidate graph operations transforming the node features. In the computation cell, the input supernode is the output of the previous one, and the output is the channel-wise concatenated node features of all the intermediate supernodes.
Each intermediate supernode can be obtained by summing all of its transformed predecessors (the ordering is denoted as "N-1", "N-2", "N-3" in Figure 2) as follows:

X^(j) = Σ_{i<j} o^{ij}( X^(i) ),    (1)

where X^(i) and X^(j) are the node features of the i-th and j-th supernodes, and o^{ij} is the operation on superedge (i, j).
Thus the learning of the cell structure reduces to learning the operations on each superedge, so we design the candidate operations as follows. We design the basic operations based on graphs for explicit relation modeling. In addition to graph convolution, we propose several new operations, i.e. difference propagation, temporal convolution, background incorporation, and node attention, which explicitly model different relations in videos and serve as basic operations in the search space.
Feature Aggregation
Graph convolution networks (GCN) [20] are commonly used to model relations. A GCN employs feature aggregation for relation reasoning, in which each node aggregates features from its neighboring nodes as follows:

z_i = δ( Σ_j a^f_{ij} W^f x_j ),    (2)

where x_j ∈ R^{C_in} is the feature of node-j with C_in dimensions, W^f ∈ R^{C_out×C_in} is the feature transform matrix applied to each node, a^f_{ij} = x_i^T U^f x_j is the affinity between node-i and node-j with learnable weights U^f, δ is a nonlinear activation function, and z_i ∈ R^{C_out} is the updated feature of node-i with C_out dimensions. Through information aggregation on the graph, each node enhances its features by modeling the dependencies between nodes.
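A minimal PyTorch-style sketch of this aggregation is given below. The class name and tensor shapes are illustrative (the paper does not publish code), and we use a row-wise softmax to normalize the affinities, standing in for the row normalization described in the supplementary material:

```python
import torch
import torch.nn as nn

class FeatureAggregation(nn.Module):
    """Graph convolution: each node aggregates neighbor features weighted by learned affinities."""

    def __init__(self, c_in, c_out):
        super().__init__()
        self.U = nn.Linear(c_in, c_in, bias=False)   # affinity weights U^f
        self.W = nn.Linear(c_in, c_out, bias=False)  # feature transform W^f
        self.act = nn.LeakyReLU()

    def forward(self, x):
        # x: (N, c_in) features of all RoI nodes in a video.
        a = torch.softmax(x @ self.U(x).t(), dim=1)  # a_ij = x_i^T U x_j, row-normalized
        return self.act(a @ self.W(x))               # z_i = act(sum_j a_ij W x_j)
```

For example, with 16 frames and 10 proposals per frame, `FeatureAggregation(256, 256)(torch.randn(160, 256))` returns updated features for all 160 nodes.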
Difference Propagation
In videos, the differences between objects are important for recognizing interactions. However, a GCN only aggregates features with a weighted sum, which makes it hard to explicitly capture the differences. Therefore, we design a difference propagation operation to explicitly model the differences.
By slightly modifying Equation (2), the differences can be explicitly modeled as follows:

z_i = δ( Σ_j a^d_{ij} W^d (x_i − x_j) ),    (3)

where the symbols have meanings similar to those in Equation (2). The term (x_i − x_j) in Equation (3) explicitly models the difference between node-i and node-j, and the differences are then propagated on the graph, as shown in Figure 3(a). Difference propagation focuses on the differences between nodes to model the changes or differences of objects, which benefits recognizing interactions relevant to such changes or differences.
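A sketch mirroring the feature-aggregation module above but propagating pairwise differences is shown below; the names and the softmax normalization are again our assumptions:

```python
import torch
import torch.nn as nn

class DifferencePropagation(nn.Module):
    """Propagates pairwise feature differences (x_i - x_j) instead of raw neighbor features."""

    def __init__(self, c_in, c_out):
        super().__init__()
        self.U = nn.Linear(c_in, c_in, bias=False)
        self.W = nn.Linear(c_in, c_out, bias=False)
        self.act = nn.LeakyReLU()

    def forward(self, x):
        # x: (N, c_in) node features.
        a = torch.softmax(x @ self.U(x).t(), dim=1)        # (N, N) affinities
        diff = x.unsqueeze(1) - x.unsqueeze(0)              # (N, N, c_in), diff[i, j] = x_i - x_j
        agg = (a.unsqueeze(-1) * self.W(diff)).sum(dim=1)   # sum_j a_ij W (x_i - x_j)
        return self.act(agg)
```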
Temporal Convolution
Nodes in videos are inherently in temporal order. However, both feature aggregation and difference propagation model the features in an unordered manner and ignore temporal relations. Here we employ temporal convolution to explicitly learn temporal representations.
In temporal convolution, we first obtain node sequences in temporal order. Given node-i in the t-th frame, we find its nearest node (not required to represent the same object) in each frame, measured by the inner product of node features, and arrange these nodes in temporal order to form a sequence (x̂_i^0, x̂_i^1, …, x̂_i^{T−1}), where x̂_i^τ denotes the nearest node in frame τ (τ = 0, …, T − 1) with reference to the given node x_i^t. Then we conduct temporal convolution over the node sequence, as shown in Figure 3(b):

z_i = W^t ∗ (x̂_i^0, x̂_i^1, …, x̂_i^{T−1}),

where ∗ denotes temporal convolution and W^t is the convolution kernel. The temporal convolution explicitly learns temporal representations to model the significant appearance changes of the node sequence, which is essential for identifying interactions with temporal relations.
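A sketch of this operation is given below. The kernel size of 7 follows the supplementary material, but the final pooling of the convolution output over time into z_i is our assumption, and the sketch assumes every frame contributes at least one proposal:

```python
import torch
import torch.nn as nn

class TemporalConvOverNearestNodes(nn.Module):
    """Builds a per-node temporal sequence of nearest nodes across frames, then applies a 1D conv."""

    def __init__(self, c_in, c_out, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv1d(c_in, c_out, kernel_size, padding=kernel_size // 2)
        self.act = nn.LeakyReLU()

    def forward(self, x, frame_ids):
        # x: (N, c_in) node features; frame_ids: (N,) frame index of each node.
        num_frames = int(frame_ids.max().item()) + 1
        sims = x @ x.t()                                    # inner-product similarities
        seqs = []
        for i in range(x.size(0)):
            steps = []
            for t in range(num_frames):
                idx = torch.nonzero(frame_ids == t, as_tuple=True)[0]
                nearest = idx[sims[i, idx].argmax()]        # nearest node to node-i in frame t
                steps.append(x[nearest])
            seqs.append(torch.stack(steps, dim=1))          # (c_in, T)
        seq = torch.stack(seqs, dim=0)                      # (N, c_in, T)
        out = self.conv(seq)                                # (N, c_out, T)
        return self.act(out.mean(dim=-1))                   # pool over time (assumed) -> (N, c_out)
```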
Background Incorporation
The node features derived from RoIAlign exclude the background information. However, the background is useful because the objects probably interact with it. This inspires us to design the background incorporation operation. In each frame, the detected objects have different affinities with different regions of the background, as illustrated in Figure 3(c). Denote the feature of node-i in the t-th frame as x_i^t ∈ R^{C_in} and the background feature map corresponding to the t-th frame as y^t ∈ R^{h×w×C_in}. The affinity between x_i^t and y_j^t (j = 1, …, h×w) can be calculated as

a^b_{ij} = (x_i^t)^T U^b y_j^t,

where U^b contains learnable weights. The a^b_{ij} indicates the relation between the node and the background with spatial structure, which can be transformed into node features:

p_i = V^b a^b_i,

where a^b_i = [a^b_{i1}; …; a^b_{i,h·w}] ∈ R^{h·w} is the affinity vector and V^b ∈ R^{C_out×(h·w)} is the transform matrix transforming the affinity vector into node features.
In addition, the background features can be aggregated according to the affinity a^b_{ij} to model the dependencies between the detected objects and the background:

q_i = Σ_j a^b_{ij} W^b y_j^t.

Finally, the updated node features are the combination of the two features above followed by a nonlinear activation:

z_i = δ( p_i + q_i ).
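A compact sketch of this operation for a single frame is shown below; the module layout, softmax normalization of the affinities, and variable names are our assumptions based on the description above:

```python
import torch
import torch.nn as nn

class BackgroundIncorporation(nn.Module):
    """Relates each RoI node to the background feature map and folds that context back into the node."""

    def __init__(self, c_in, c_out, h, w):
        super().__init__()
        self.U = nn.Linear(c_in, c_in, bias=False)    # affinity weights between node and background cells
        self.V = nn.Linear(h * w, c_out, bias=False)  # maps the affinity vector to node features
        self.W = nn.Linear(c_in, c_out, bias=False)   # transforms aggregated background features
        self.act = nn.LeakyReLU()

    def forward(self, x, bg):
        # x: (N, c_in) node features of one frame; bg: (h, w, c_in) background feature map of that frame.
        y = bg.reshape(-1, bg.size(-1))               # (h*w, c_in)
        a = torch.softmax(self.U(x) @ y.t(), dim=1)   # (N, h*w) node-to-background affinities
        p = self.V(a)                                 # affinity vector -> node features
        q = a @ self.W(y)                             # aggregate background features by affinity
        return self.act(p + q)                        # combine and activate
```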
Node Attention
The graph contains hundreds of nodes, but they contribute differently to recognizing interactions. Some nodes irrelevant to the interaction serve as outliers that interfere with interaction modeling, so it is reasonable to weaken the outliers with an attention scheme. The outliers are often nodes wrongly detected by the RPN, which usually have few similar nodes, and their similar nodes do not locate regularly at specific regions or along the video, as briefly illustrated in Figure 3(d). We therefore calculate the attention weights according to the similarities and relative positions of the top-M similar nodes:

w_i = σ( f(a^n_i, Δs_i) ),  with  a^n_i = [a^n_{ij_1}; a^n_{ij_2}; …; a^n_{ij_M}],

where w_i is the attention weight of x_i, calculated by a learnable function f from the similarity vector a^n_i and the relative positions Δs_i of the top-M similar nodes, σ is the sigmoid nonlinear function, j_m is the index of node-i's m-th most similar node measured by inner product, a^n_{ij_m} is the inner product of node features between node-i and node-j_m, and s_i = [x_i; y_i; t_i] is the normalized spatial and temporal position of node-i. With the attention weights, we are able to focus on informative nodes and neglect the outliers.
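A sketch of one possible realization is given below; the choice of M = 8 and the single linear layer over concatenated similarities and relative positions are our assumptions:

```python
import torch
import torch.nn as nn

class NodeAttention(nn.Module):
    """Downweights outlier RoIs using similarities and relative positions of the top-M similar nodes."""

    def __init__(self, c_in, top_m=8):
        super().__init__()
        self.top_m = top_m
        # Maps [M similarities ; M relative (x, y, t) offsets] to a scalar attention weight.
        self.fc = nn.Linear(top_m * 4, 1)

    def forward(self, x, pos):
        # x: (N, c_in) node features; pos: (N, 3) normalized (x, y, t) positions.
        sims = x @ x.t()                                      # inner-product similarities
        sims.fill_diagonal_(float('-inf'))                    # ignore self-similarity
        top_sim, top_idx = sims.topk(self.top_m, dim=1)       # (N, M)
        rel_pos = pos[top_idx] - pos.unsqueeze(1)             # (N, M, 3) relative positions
        feats = torch.cat([top_sim, rel_pos.flatten(1)], 1)   # (N, 4M)
        w = torch.sigmoid(self.fc(feats))                     # (N, 1) attention weights
        return x * w                                          # weaken likely outliers
```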
The graph operations above explicitly capture different relations in videos and serve as the basic operations in the architecture search space, which facilitates structure search in Section 3.3.
Searching Adaptive Structures
With the constructed search space, we are able to search adaptive structures for interaction modeling. We employ the differentiable architecture search mechanism of DARTS [25] to develop our search framework, and revise the learning of operation weights to facilitate the search of adaptive interaction modeling structures.
DARTS. DARTS utilizes continuous relaxation to learn the specific operations (o^{ij} in Equation (1)) on the superedges. The softmax combination of all the candidate operations is calculated as the representation of each supernode:

ō^{ij}( X^(i) ) = Σ_{o∈O} [ exp(α^{ij}_o) / Σ_{o'∈O} exp(α^{ij}_{o'}) ] · o( X^(i) ),    (10)

where O is the set of candidate operations, o represents a specific operation, α^{ij}_o is the operation weight of operation o on superedge (i, j), and ō^{ij}(X^(i)) is the mixed output. In this way, learning the cell structure reduces to learning the operation weights α^{ij}_o. To derive the discrete structure after the search procedure converges, the operation with the strongest weight is selected as the final operation on superedge (i, j):

o^{ij} = argmax_{o∈O} α^{ij}_o.    (11)

Adaptive Structures. Since the interactions differ from video to video, we attempt to learn adaptive structures for automatic interaction modeling. However, the operation weights α^{ij}_o in Equation (10) are non-adaptive. Therefore, we modify the α^{ij}_o to be adaptive by connecting them with the input video through a fully-connected (FC) layer:

α^{ij}_o = A^{ij}_o X,    (12)

in which X is the global feature of the input video (global average pooling of the backbone feature) and A^{ij}_o are the learnable structure weights corresponding to operation o on superedge (i, j). In this way, adaptive structures are constructed for different videos to model the interactions.
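A sketch of one superedge with adaptive operation weights is given below. It assumes each candidate operation is callable on the node features alone (extra inputs such as background maps or positions would be bound beforehand), and the class and method names are illustrative:

```python
import torch
import torch.nn as nn

class AdaptiveMixedOp(nn.Module):
    """Softmax-weighted mixture of candidate graph operations on one superedge,
    with operation weights predicted from the global video feature (cf. Eq. 10 and 12)."""

    def __init__(self, ops, c_global):
        super().__init__()
        self.ops = nn.ModuleList(ops)             # candidate graph operations o in O
        self.fc = nn.Linear(c_global, len(ops))   # A^{ij}: global video feature -> alpha^{ij}

    def forward(self, x, video_feat):
        # x: (N, C) node features on this superedge; video_feat: (c_global,) pooled backbone feature.
        alpha = self.fc(video_feat)                # adaptive operation weights alpha^{ij}_o
        weights = torch.softmax(alpha, dim=0)      # continuous relaxation
        return sum(w * op(x) for w, op in zip(weights, self.ops))

    def derive(self, video_feat):
        # After the search converges, keep only the strongest operation (cf. Eq. 11).
        return self.ops[int(self.fc(video_feat).argmax())]
```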
Unlike DARTS, which alternately optimizes the model on the training and validation sets to approximate the architecture gradients, we jointly optimize the structure weights and the weights of all graph operations on the training set to learn adaptive structures.
Fixing Substructures. It is time-consuming to search for stable structures with too many candidate operations. We attempt to reduce the number of basic operations by combining several operations into fixed substructures and regarding the fixed substructures as basic operations in the search space. For example, we connect feature aggregation and node attention sequentially into a fixed combination, and put it after the other three graph operations to construct three fixed substructures for search (as shown on the superedges in Figure 4).
By this means, we accelerate search by simplifying the search space and also deepen the structures because each superedge contains multiple graph operations.
Diversity Regularization. We find that the search framework easily selects only one or two operations to construct structures, because these operations are easier to optimize. However, other operations are also effective for interaction modeling, so we hope to keep more operations activated in the searched structures. We introduce the variance of the operation weights as an auxiliary loss to encourage all operations to be selected equally:

L_var = Σ_{(i,j)} Var_{o∈O}( α^{ij}_o ),    (13)

where Var_{o∈O}(·) denotes the variance over the candidate operations on superedge (i, j). The variance loss is added to the classification loss for optimization.
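A sketch of this regularizer is given below (one assumed form; the exact reduction over superedges is not specified in the text):

```python
import torch

def diversity_loss(alpha_list):
    """Auxiliary variance loss over operation weights: minimizing it pushes the weights on each
    superedge toward equality, keeping more candidate operations in play."""
    # alpha_list: iterable of (num_ops,) operation-weight vectors, one per superedge.
    return torch.stack([a.var() for a in alpha_list]).sum()

# Typical use, with the weight of 0.1 stated for the auxiliary loss:
# loss = classification_loss + 0.1 * diversity_loss(all_alphas)
```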
Datasets
We conduct experiments on two large interaction datasets, Something-Something-V1 (Sth-V1) and Something-Something-V2 (Sth-V2) [12] (see Figures 7 and 8 for example frames). Sth-V1 contains 108,499 short videos across 174 categories. Recognizing them requires interaction reasoning and common-sense understanding. Sth-V2 is an extended version of Sth-V1 that reduces the label noise.
Implementation Details
During training, we employ stagewise training of the backbone and the graph operations search for easier convergence, and we alternately optimize the weights of all graph operations and the structure weights (A^{ij}_o in Equation (12)) to search adaptive structures.
In the structures search stage, we include the zero and identity as additional candidate operations. Following [7], we add dropout after identity to avoid its domination in the searched structures. We use 3 intermediate supernodes in each computation cell. The weight for auxiliary variance loss L var (Equation (13)) is set to 0.1.
More details about the model, training procedure and data augmentation are included in supplementary materials.
Analysis of Architecture Search Framework
In this section, we analyze our architecture search framework. First, we compare the interaction recognition accuracy of our searched structures with our baselines; the results are shown in Table 1. It is observed that our searched structures obtain about 3% improvements over the baselines, i.e. global pooling (global average pooling of the backbone feature) and pooling over RoIs (average pooling over all the RoI features), indicating that the searched structures are effective for modeling interactions and improving recognition performance. In the following, we show the searched structures and analyze the effects of adaptive structures.
Searched Structures
Figure 4 shows two examples of input videos and the corresponding searched structures; in the figure, "feat aggr", "diff prop", "temp conv", "back incor", and "node att" represent feature aggregation, difference propagation, temporal convolution, background incorporation, and node attention, respectively. From the searched structures we observe that our architecture search framework learns adaptive structures for different input videos. The main differences between the two structures are the superedges entering "N-3": case 1 learns a simple structure, whereas case 2 selects a complicated structure with more graph operations. Perhaps case 2 is easily confused with other interactions and requires a complicated structure to capture detailed relations for effective interaction modeling.
Mismatch of videos and structures. To validate the specificity of adaptive structures, we swap the two searched structures in Figure 4 to mismatch the input videos, and use them to recognize the interactions. The results are compared in Figure 5. We observe that the mismatch of videos and structures leads to misclassification, which reveals that different videos require different structures for effective interaction modeling, since different interactions of different complexities are involved.
Analysis of Adaptive Structures
To understand the relations between the adaptive structures and the interaction categories, we statistically analyze the proportion of videos per class corresponding to different searched structures in the validation set; Figure 6 compares these proportions. This reveals that our architecture search framework learns to roughly divide the videos into several groups according to some characteristics of the interactions, and searches specialized structures for the different groups for adaptive interaction modeling. In other words, the adaptive structures automatically model interactions in a coarse (groups) to fine (specialized structure for each group) manner.
We further quantitatively compare the interaction recognition accuracy of the non-adaptive and adaptive search schemes in Table 1. We make the following observations. On the one hand, the adaptive scheme gains better performance than the non-adaptive schemes. On the other hand, using only one searched structure for testing leads to obvious performance degradation, since different structures are searched to match different groups during training but only one structure is used for testing, which is insufficient to model the interactions in all groups. These observations further indicate the effectiveness of the adaptive structures. We also validate that learning with fixed substructures gains slight improvements, that diversity regularization helps to learn structures with multiple operations, and that the adaptive structures can transfer across datasets. For more details, please refer to our supplementary materials.
Analysis of Graph Operations
In this section, we analyze the role of each graph operation in interaction modeling. First, we compare the recognition accuracy of the different operations by placing each of them on top of the backbone; the results are shown in Table 2. It is seen that all the operations improve the performance over the baselines, indicating that explicitly modeling relations with graph operations benefits interaction recognition. Different graph operations gain different improvements, depending on the significance of different relations in the datasets. In the following, we visualize some nodes and cases to demonstrate the different effects of the graph operations in interaction modeling.
Top activated nodes. We visualize the nodes with the top affinity values of some operations for the same video in Figure 7. Feature aggregation focuses on apparently similar nodes to model the dependencies among them, as shown in Figure 7(a). On the contrary, difference propagation models the significant changes of some obviously different nodes in Figure 7(b). In Figure 7(c), the nodes with high attention weights are the hand or the bag, and the nodes with low attention weights are outliers, which indicates that node attention helps to concentrate on important nodes and eliminate the interference of outliers.
Successful and failed cases. We show some successful and failed cases to indicate the effects of the different operations in Figure 8. In Figure 8(a), feature aggregation successfully recognizes the interaction due to the obvious dependencies between the paper and the mug. However, it fails when the detailed relations in Figures 8(b) and 8(c) are present. In Figure 8(b), difference propagation and temporal convolution can capture that the lid is rotating, so they correctly recognize the interaction. In Figure 8(c), background incorporation is able to capture the relations between the towel and the water in the background, so it makes the correct prediction, whereas the other operations, which ignore background information, struggle to recognize such an interaction with the background.
More case study and analysis about graph operations are included in supplementary materials.
Comparison with State-of-the-arts
We compare the interaction recognition accuracy with recent state-of-the-art methods; the results are shown in Table 3. Except for STM [19], our method outperforms the other methods, which indicates its effectiveness. We model the interactions with adaptive structures, which enhances the ability of interaction modeling and boosts performance.
Among the recent state-of-the-art methods, I3D+GCN [39] also uses graph operations over object proposals to recognize interactions. Our method surpasses it by a margin of about 7%, perhaps because we have trained a better backbone with our data augmentation techniques (see Section 4.2 for details), and because our adaptive structures with multiple graph operations learn better interaction representations. STM [19] proposes a block to encode spatiotemporal and motion features and stacks it into a deep network, which obtains better performance on the Something-Something-V2 dataset than ours. However, we adaptively model interactions with different structures, which provides more understanding of the relations between interactions and the corresponding structures, instead of only encoding features as STM does. In addition, our structures are searched automatically, which removes the need for manual structure design.
Conclusion
In this paper, we propose to automatically search adaptive network structures for interaction recognition, which enables adaptive interaction modeling and reduces structure design effort. We design the search space with several proposed graph operations, and employ a differentiable architecture search mechanism to search adaptive interaction modeling structures. Our experiments show that the architecture search framework learns adaptive structures for different videos, helping us understand the relations between structures and interactions. In addition, the designed basic graph operations model different relations in videos. The searched adaptive structures obtain interaction recognition performance competitive with the state of the art.
Supplementary Material for: Adaptive Interaction Modeling via Graph Operations Search
1. Additional Results
Results on Test Set
We report the results on the validation set in our paper for comprehensive comparison, because most other methods only report validation results due to the withheld test labels. In Table 1 of this supplementary material, we additionally report our results on the test set.
Model Size and Inference Time
In Table 2, we evaluate the number of parameters and MACs (multiply-accumulate operations) of our model using a publicly available tool (https://github.com/sovrasov/flops-counter.pytorch). Our method does not increase the model size too much (it is comparable with Nonlocal I3D), but still boosts performance.
In terms of inference time, our framework takes around 0.12 seconds per video with 32 sampled frames on a single GTX 1080 Ti GPU, which is close to real time and would not lead to excessive latency.
Implementation Details
Backbone. We use I3D-ResNet [38,39], which inflates 2D convolution kernels into 3D kernels for initialization [5], as our backbone (Table 3) to extract basic features. It is inflated from ResNet-50 [16] with ImageNet [9] pretrained parameters, and it extracts video features after "res5" with 2048 channels from 32 uniformly sampled frames. For computational efficiency in graph reasoning, we reduce the feature dimension from 2048 to 256 with an FC layer.

Table 3 (partial, first backbone layers):
layer   kernel / stride                  output size
conv1   5×7×7, 64, stride 1,2,2          32×112×112
pool1   3×3×3 max, stride 1,2,2          32×56×56

RPN. We use an RPN [15] with ResNet-50 and FPN to extract region proposals. The RPN model is pre-trained on the MSCOCO object detection dataset [23]. To match the output time dimension of the "res5" feature maps, we sample 16 frames from the 32 input frames (1 frame every 2 frames) to extract region proposals. The top 10 class-agnostic object bounding boxes are extracted for each frame.

Graph operation settings. The shapes of all the transform matrices W*, V* in the graph operations are set to 256 × C_in, where C_in differs from operation to operation. The shapes of the affinity weights U* are set to C_in × C_in. The affinity matrices are row-normalized so that the affinities connected to each node sum to 1. The size of the temporal convolution kernel W_t is set to 7. We employ Layer Normalization [1] followed by LeakyReLU as the nonlinear activation function of each operation.
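A minimal sketch of one graph operation with these settings, using difference propagation as the example. The softmax below is just one way to row-normalize the affinities, and all names are illustrative rather than the authors' implementation; the temporal convolution would analogously be a 1D convolution with kernel size 7 along the time axis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferencePropagation(nn.Module):
    """Sketch of a graph operation with the settings above: affinity weights U of
    shape (c_in, c_in), a transform to 256 dimensions, row-normalized affinities,
    and LayerNorm followed by LeakyReLU as the nonlinearity."""
    def __init__(self, c_in, c_out=256):
        super().__init__()
        self.U = nn.Linear(c_in, c_in, bias=False)   # affinity weights
        self.W = nn.Linear(c_in, c_out, bias=False)  # transform matrix
        self.norm = nn.LayerNorm(c_out)

    def forward(self, x):                             # x: (N, c_in) node features
        aff = F.softmax(x @ self.U(x).t(), dim=-1)    # row-normalized affinity (N, N)
        diff = x.unsqueeze(1) - x.unsqueeze(0)        # pairwise differences (N, N, c_in)
        out = (aff.unsqueeze(-1) * diff).sum(dim=1)   # propagate weighted differences
        return F.leaky_relu(self.norm(self.W(out)))
```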
Training. We train our model in the following steps:
1. Train the backbone on the target datasets.
2. Fix the backbone. Train the weights in all the graph operations and the structure weights alternately to learn adaptive structures until the adaptive structures are stable (a minimal sketch of this alternating update is given after this list). The SGD optimizer is used for the weights in all the graph operations, and the Adam optimizer is used for the structure weights. The learning rate for the operation weights is 0.01, and the learning rate for the structure weights is 0.0001.
3. Fix the structure weights. Train the weights in all the graph operations with discrete structures. The SGD optimizer is used and the learning rate is set to 0.001. The learning rate is divided by 10 when the validation loss does not decline for 5 epochs, and training is stopped when the validation loss does not decline for 5 epochs at a learning rate of 0.0001.
4. Train the weights in all the graph operations and the backbone jointly. The SGD optimizer is used with a learning rate of 0.0001.
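A self-contained toy sketch of the alternating optimization in step 2. A single mixed operation stands in for the searchable model; the optimizer choices and learning rates follow the description above, while the data and loss are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Toy stand-in for the searchable model: two candidate operations blended by structure weights."""
    def __init__(self, dim):
        super().__init__()
        self.ops = nn.ModuleList([nn.Linear(dim, dim), nn.Linear(dim, dim)])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # structure weights

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

model = MixedOp(dim=16)
op_params = [p for n, p in model.named_parameters() if n != "alpha"]
arch_params = [model.alpha]

# SGD (lr 0.01) for operation weights, Adam (lr 1e-4) for structure weights.
op_opt = torch.optim.SGD(op_params, lr=0.01)
arch_opt = torch.optim.Adam(arch_params, lr=1e-4)
criterion = nn.MSELoss()

for step in range(100):
    # Update the structure weights (in practice on held-out data).
    x, y = torch.randn(8, 16), torch.randn(8, 16)
    arch_opt.zero_grad(); criterion(model(x), y).backward(); arch_opt.step()
    # Update the operation weights on training data.
    x, y = torch.randn(8, 16), torch.randn(8, 16)
    op_opt.zero_grad(); criterion(model(x), y).backward(); op_opt.step()
```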
Testing. At test time, we sample 5 clips from each video and use the mean score for classification.

Data augmentation. We divide the video into 32 segments and randomly sample one frame in each segment, in order to obtain different samples from the same video for augmentation (a short sketch of this sampling is given at the end of this subsection). We also randomly crop and horizontally flip all the sampled frames of the same video. Note that some categories are direction-sensitive, so we do not apply horizontal flipping to those videos.

Table 4 compares the performance of learning with the original graph operations and with fixed substructures. Learning with fixed substructures obtains higher accuracy, perhaps because it simplifies the optimization with fewer structure weights and also implicitly deepens the structures. In addition, learning with fixed substructures converges faster than learning with the original graph operations in our experiments, which reduces the search time.

Table 4. Interaction recognition accuracy (%) comparison of different search spaces. "Ori Ops" means the original graph operations are used as basic operations in the search space, and "Fixed Subs" means the fixed substructures are used as basic operations to search the structures.
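A minimal sketch of the segment-based frame sampling described above; the function name and the uniform choice within each segment are assumptions for illustration.

```python
import random

def sample_segment_frames(num_frames_in_video, num_segments=32):
    """Split the video into equal-length segments and randomly pick one frame index
    per segment, so different epochs see different frames of the same video."""
    seg_len = num_frames_in_video / num_segments
    return [int(i * seg_len + random.uniform(0, seg_len)) for i in range(num_segments)]

# Example: 32 frame indices sampled from a 240-frame video.
print(sample_segment_frames(240))
```

Random cropping and the class-dependent horizontal flipping would then be applied to the frames returned by this sampler.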
Diversity Regularization
To validate the effect of diversity regularization, we search non-adaptive structures with the original graph operations on the Something-Something-V1 dataset and compare the structures searched without and with diversity regularization (the variance loss in Equation (13) in our paper) in Figure 1. The structure learned without the variance loss tends to select only "node attention", which hampers complex relation modeling and also yields unsatisfactory performance (Table 5). In contrast, the structure learned with the variance loss selects diverse graph operations, which enhances the ability of interaction modeling and gains better recognition results. Therefore, we use diversity regularization in all other experiments.
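The exact form of Equation (13) is not reproduced here; the sketch below shows one plausible variance-style diversity regularizer over the structure weights, which penalizes probability mass collapsing onto a single operation.

```python
import torch
import torch.nn.functional as F

def diversity_loss(alphas):
    """One plausible variance-style diversity regularizer (the paper's Eq. (13) may differ).
    `alphas` holds structure weights with shape (num_edges, num_ops); a low variance of the
    average per-operation usage means the operations are selected more evenly."""
    probs = F.softmax(alphas, dim=-1)   # per-edge operation probabilities
    usage = probs.mean(dim=0)           # average usage of each operation across edges
    return usage.var()

alphas = torch.randn(6, 4, requires_grad=True)  # 6 edges, 4 candidate operations
reg = 0.1 * diversity_loss(alphas)              # added to the recognition loss during search
print(reg.item())
```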
Transferability of Adaptive Structures
In order to verify the transferability of the adaptive structures, we learn the adaptive structures on one dataset, and then fix the structure weights and train the remaining learnable weights on the other dataset. The results are shown in Table 6. The interaction recognition performance declines only slightly (0.4% for the Something-Something-V1 dataset and 0.1% for the Something-Something-V2 dataset), which indicates that the adaptive structures can transfer across datasets with minor performance degradation.
We further show the proportion of videos per class corresponding to some structures in the original and transferred datasets. Figure 2 to Figure 4 show three examples of a structure and its corresponding interaction category distributions in the original dataset (Something-Something-V1) and the transferred dataset (Something-Something-V2). According to the category indices and the label lists, the dominant interaction categories are semantically similar in the two datasets. In both the original dataset and the transferred dataset, the dominant categories in the three examples concern camera motion, pushing/poking/throwing something, and moving something away or closer, respectively. These results illustrate that the adaptive structures are learned according to interaction characteristics, can transfer across datasets, and help us understand the relations between structures and interactions.
Computation Cell
We compare the performance of computation cells with different numbers of intermediate supernodes, and the results are shown in Table 7. Computation cells with 3 and 4 intermediate supernodes obtain better performance than the cell with 2 intermediate supernodes.
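A rough sketch of such a computation cell with a configurable number of intermediate supernodes, in the DARTS-style formulation assumed here; the candidate operations are stand-in linear layers rather than the graph operations above, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Cell(nn.Module):
    """Sketch of a computation cell: each intermediate supernode sums mixed candidate
    operations applied to the input node and to all previous supernodes."""
    def __init__(self, dim, num_intermediate=3, num_ops=4):
        super().__init__()
        self.num_intermediate = num_intermediate
        self.ops = nn.ModuleList()
        self.alphas = nn.ParameterList()
        for i in range(num_intermediate):
            num_inputs = i + 1  # the input node plus the previous supernodes
            self.ops.append(nn.ModuleList(
                nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_ops))
                for _ in range(num_inputs)))
            self.alphas.append(nn.Parameter(torch.zeros(num_inputs, num_ops)))

    def forward(self, x):                      # x: (N, dim) node features
        nodes = [x]
        for i in range(self.num_intermediate):
            w = F.softmax(self.alphas[i], dim=-1)
            nodes.append(sum(w[j, k] * self.ops[i][j][k](nodes[j])
                             for j in range(len(nodes))
                             for k in range(w.size(1))))
        return sum(nodes[1:])                  # aggregate the supernode outputs
```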
Deeper Structures
We attempt to stack multiple searched computation cells to construct deeper structures. Table 8 compares the results with 1 and 2 stacked cells. Deeper structures with 2 stacked cells gain no improvement, perhaps due to the gap between search and evaluation [7] and overfitting. Searching with multiple stacked cells might boost performance, but it would incur heavy computational cost. For simplicity, we use only 1 cell in our experiments.
More Successful and Failed Cases
We show more successful and failed cases to indicate the effects of different operations in Figure 5.
The feature aggregation successfully recognizes the simple interaction in (a), but it fails in cases (b), (c), and (d) because some detailed relations need to be modeled with other operations.
In case (b), the feature aggregation mistakenly recognizes the interaction as "Letting something roll down a slanted surface". The key to distinguishing the two interactions is the difference between rolling and sliding. The difference propagation focuses on the differences between detected objects, which enables it to capture the changes of the object sliding down, so it correctly classifies the interaction as sliding.
In case (c), the feature aggregation mistakenly recognizes the interaction as "Moving something and something closer to each other", since the two objects are indeed getting closer. However, the key is whether both objects are being moved. The temporal convolution aims to capture the evolution of the interaction, and it can observe that one of the objects is always static, which makes the interaction distinguishable.
In case (d), something is pushed but does not fall off, yet the feature aggregation misclassifies it as "Pushing something off of something". The background would change dramatically if the box fell off, so the background incorporation, which models the relations between the nodes and the background, helps to identify the action. In contrast, the operations relying on detected objects fail easily because the table is not detected by the RPN model.
However, there are still many cases that are commonly misclassified by the different graph operations and the searched structures, such as the example shown in Figure 5(e). The poor-quality frames prevent effective modeling based on RGB inputs. Moreover, some confusing labels and incorrectly detected object bounding boxes also hinder further improvements in interaction modeling, and these issues need to be addressed in the future.
Accuracy of Each Graph Operation
To show the effects of different graph operations on different interaction categories, we compare the recognition accuracy of each graph operation on some interaction categories where different operations obtain quite different performance. The results are shown in Figure 6. Different operations perform differently on the same interaction category, since they tend to model different relations in videos. We can also observe that the difference propagation and the temporal convolution generally work well on interactions with detailed changes, such as spilling something, moving something slightly and rolling something, while the background incorporation works well on interactions involving relations between objects and the background, such as positional relations and relations to surfaces.
|
2020-05-06T01:00:54.807Z
|
2020-05-05T00:00:00.000
|
{
"year": 2020,
"sha1": "5c02af517dff2ff0d34dee04535731b7d252bbbb",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2005.02113",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5c02af517dff2ff0d34dee04535731b7d252bbbb",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
54637237
|
pes2o/s2orc
|
v3-fos-license
|
A historical assessment of sources and uses of wheat varietal innovations in South Africa
FUNDING: Bill and Melinda Gates Foundation We undertook a historical review of wheat varietal improvements in South Africa from 1891 to 2013, thus extending the period of previous analyses. We identified popular wheat varieties, particularly those that form the basis for varietal improvements, and attempted to understand how policy changes in the wheat sector have affected wheat varietal improvements in the country over time. The empirical analysis is based on the critical review of information from policies, the varieties bred and their breeders, the years in which those varieties were bred, and pedigree information gathered from the journal Farming in South Africa, sourced mainly from the National Library of South Africa and the International Maize and Wheat Improvement Center (CIMMYT) database. A database of the sources and uses of wheat varietal innovations in South Africa was developed using information from the above sources. The data, analysed using trend and graphical analysis, indicate that, from the 1800s, wheat varietal improvements in the country focused on adaptability to the production area, yield potential and stability and agronomic characteristics (e.g. tolerance to diseases, pests and aluminium toxicity). An analysis of the sources of wheat varietal improvements during the different periods indicates that wheat breeding was driven initially by individual breeders and agricultural colleges. The current main sources of wheat varietal improvements in South Africa are Sensako, the Agricultural Research Council’s Small Grain Institute (ARC–SGI) and Pannar. The structural changes in the agricultural sector, particularly the establishment of the ARC–SGI and the deregulation of the wheat sector, have helped to harness the previously fragmented efforts in terms of wheat breeding. The most popular varieties identified for further analysis of cost attribution and the benefits of wheat varietal improvements were Gariep, Elands and Duzi.
Introduction and background
The driving factors for investment in crop varietal innovations include the need to improve (1) yield potential, (2) resistance/tolerance to biotic and abiotic stresses and (3) nutritional and processing quality. 1,2 Greater investments in agricultural research and development (R&D), particularly varietal innovations, are necessary to increase and sustain agricultural productivity, as well as to address challenges such as poverty, food security, adaptation to climate change, increased weather variability, water scarcity and the volatility of prices in global markets. 3 The World Bank's 2008 Development Report argued that productivity gains through innovations that address increasing scarcities of land and water remain the main source of growth in agriculture and a primary source of increased food and agricultural production to feed the increasing demand. Innovations such as crop varietal improvements need to focus beyond raising productivity to addressing additional challenges such as water scarcity, risk reduction, improved product quality and environmental protection.
Du Plessis 4 reported that the first wheat production in South Africa occurred in the winter of 1652 when Jan van Riebeeck planted the first winter wheat. This development in the 1600s was the foundation of all wheat production and subsequent breeding programmes to date. Despite the first production of wheat in the 1600s, wheat varietal breeding was reported to have been established more than two centuries later in 1891. 5 The focus of wheat varietal improvements in South Africa addresses the following cultivar characteristics: adaptability to the production area, yield potential and stability, and agronomic characteristics (e.g. tolerance to diseases, pests and aluminium toxicity). The wheat varietal improvement sector consists of three main actors: the Small Grain Institute of the Agricultural Research Council (ARC-SGI; established in 1976 as the then Small Grains Centre); Sensako, established in the mid-1960s (becoming autonomous in 1999 after functioning as part of Monsanto); and Pannar (entering the wheat breeding sector in the 1990s). 5 Periodic assessment of plant breeding is required to assess the benefits of ongoing investment to allow: (1) temporary constraints that could permanently hinder the identification of crop varietal improvements to be addressed; and (2) desirable characteristics -such as quality, quantity and environmental impact -to be identified and prioritised. 6 The main objective of this study was to undertake a historical assessment of the sources and uses of wheat varietal innovations in South African agriculture. Specifically, we focused on the historical evolution of wheat varietal improvements in the country between 1891 and 2013, including the identification of popular varieties and their history, sources and uses. This assessment complements earlier efforts by Smit et al. 5 , Van Niekerk 7 , De Villiers and Le Roux 8 and Stander 9 , firstly by extending the period of analysis from early breeding periods in the early 1900s to 2013. Furthermore, the current empirical analysis is critical in helping to identify popular wheat varieties that have been bred and grown for long periods (particularly among current varieties in the market). These varieties form the basis for analysing wheat varietal improvements in South Africa, which is the focus of a forthcoming paper in which further analysis looks at the parental history of the selected varieties from the current analysis, and develops an empirical model for the attribution of costs and benefits of wheat varietal improvements in South Africa.
Wheat production in South Africa
The South African Department of Agriculture, Forestry and Fisheries 3 reported that the precise origin of wheat is not known, but there is evidence that the crop evolved from wild grasses somewhere in the Near East. Wheat is reported to have likely originated from the Fertile Crescent in the upper reaches of the Tigris-Euphrates drainage basin. Commercial wheat production in South Africa started in the early 1910s with varieties brought by the Dutch traders to Cape Town (then the Cape of Good Hope). Wheat is the second most important grain crop produced in South Africa after maize. In South Africa, the main uses of wheat are human consumption (especially for making flour for the bread industry), industrial (important sources of grain for alcoholic beverages, starch and straw), and animal feed (bran from flour milling as an important source of livestock feed, grain as animal feed). 3 There are two basic types of commercially cultivated wheat in South Africa, which differ in genetic complexity, adaptation and use: (1) bread wheat (Triticum aestivum) and (2) durum wheat (Triticum turgidum). Durum wheat was derived from the fusion of two grass species some 10 000 years ago, whereas bread wheat was derived from a cross between durum wheat and a third grass species about 8000 years ago. 10 Bread wheat and durum wheat are used to make a range of widely consumed food products; for example, bread wheat is processed into leavened and unleavened breads, biscuits, cookies and noodles and durum wheat is used to make pasta (mainly in industrialised countries), as well as bread, couscous and bulgur (mainly in the developing world). 10 South Africa mainly produces bread wheat; durum wheat represents a very small percentage of the annual wheat production in the country.
Wheat is produced in 32 of South Africa's 36 crop production regions. The main wheat-producing provinces are the Western Cape (winter rainfall), Free State (summer rainfall) and Northern Cape (irrigation).
Mpumalanga (irrigation) and North West (mainly irrigation) are other important wheat-producing provinces. 11 The annual wheat production in South Africa ranges from 1.5 to 3 million tonnes, with productivity rates of 2-2.5 tonnes/ha under dryland (Figure 1) and at least 5 tonnes/ha under irrigation. However, wheat production has been decreasing in recent years. Smit et al. 5 argue that efficiency, productivity and quality in wheat production have increased over time and that some of the contributing factors include research efforts from various disciplines such as plant breeding, agronomy, crop physiology and crop protection. For example, the productivity levels for dryland wheat have increased from less than 0.5 tonnes/ha in 1936 to more than 3.5 tonnes/ha in 2015 (Figure 1). 12 A study by Purchase 13 reported an 87% improvement in yield and a 20% improvement in baking quality between 1930 and 1990. Local production of wheat mainly comes from the Western Cape (contributing about 650 000 tonnes), Free State (580 000 tonnes), Northern Cape (300 000 tonnes), North West (162 000 tonnes) and Mpumalanga (92 000 tonnes). South Africa is a net wheat importer and imports about 300 000 tonnes of wheat per annum. 3 Wheat production in South Africa occurs in both summer and winter rainfall regions. Most of the production (at least 50%) happens under dryland conditions. In the summer rainfall region, at least 30% of the total harvest is produced under irrigation. 14 Production under irrigation has a higher yield potential than dryland wheat production. Dryland productivity in South Africa is very low compared to that of the major wheat-producing countries in the world. Pannar 14 attributes the 'slower than expected progress in yield increases of local breeding programmes' to stringent quality requirements for new varieties, as well as variable climatic conditions (including dry, warm winters), low soil fertility, new diseases such as yellow/stripe rust (Puccinia striiformis) emerging in 1996 and the emergence of new pathotypes, the introduction of the Russian wheat aphid in 1978, and a new biotype in 2005. These factors caused wheat breeding programmes to 'discontinue many promising germplasm lines' 14 despite their highly promising yield potential, as they were susceptible to new diseases and pests. The focus in wheat breeding shifted to producing varieties with resistance to specific diseases and pests.
Evolution of crop production and breeding in agriculture
Various studies have reviewed the historical changes and evolution of crop production and breeding. Examples of these studies include those of Chigeza et al. 6 , Byerlee and Moya 15 , Heisy et al. 16 , Grace and Van Staden 17 and This et al. 18 Here we briefly review these studies to understand the approaches used and some of the major findings and their implications for this paper.
In a study focusing on analysing the impact of international wheat breeding research in the developing world between 1966 and 1990, Byerlee and Moya 15 analysed the origins and trends of varieties released by national agricultural research systems (NARS) of 38 collaborating countries. The analysis of wheat varieties released by NARS included the listing of over 1300 varieties and information of their pedigrees, ecological niches and area planted. The information was used to estimate the benefits of wheat breeding on genetic yield and changes in traits such as disease resistance and quality. Byerlee and Moya 15 found an increasing proportion (84% by 1986-1990) of spring bread wheat varieties originating directly from varieties of the International Maize and Wheat Improvement Center (CIMMYT) or those with a CIMMYT parent, especially among small NARS. They also found that larger NARS used their own crosses to develop more than half of the varieties released. The analysis of wheat releases by NARS with respect to type of variety (winter bread wheat and durum wheat) and growth habit was also done for every 5-year period between 1966 and 1990. In this study we followed a similar approach to develop a comprehensive database of wheat varietal improvements in South Africa.
Smit et al. 5 summarised wheat cultivars released in South Africa between 1983 and 2008. The current study extends the analytical period to 1891 and 2013. In addition, we build on these earlier efforts to compile a comprehensive database that forms the basis for estimating the benefits and costs attributed to wheat varietal improvements in South Africa.
Another addition in the current study is the provision of the institutional evolution of wheat breeding which was not included in Smit et al.'s 5 paper. Furthermore, the focus of Smit et al. 5 was more agronomic, while the current paper focuses more on the economics side of wheat-breeding developments over the study period. Also, despite listing varieties released from 1983 to 2008, Smit et al. 5 do not provide a detailed historical evolution of wheat varietal improvements in the country.
Data and research methods
The empirical analysis is based on the critical review of information from policies, varieties bred and their breeders, years when varieties were bred, and pedigree information, as gathered from the journal Farming in South Africa, sourced mainly from the National Library of South Africa and the CIMMYT database. The focus was to identify the sources (institutions and individuals) of wheat varietal improvement innovations; where the innovations were used (areas where the wheat varieties were grown); factors driving the innovations; and the types of wheat varietal innovations. The study analysed the wheat varieties released and/or introduced in South Africa during the period 1891-2013. A database of sources and uses of wheat varietal innovations in South Africa was developed using information from the above sources. The database shows that the wheat in South Africa has been a subject of breeding endeavours for more than two centuries, and wheat varietal improvement has rapidly expanded, particularly in the past four decades.
Based on previous studies 6, [15][16][17][18] , the data were analysed using trend and graphical analysis. The analysis also considered geographical region/ area, as well as wheat type and growth habit. Although the database is incomplete and undoubtedly contains errors, it is to date the most comprehensive database available on the history of wheat varietal improvement in South Africa. This database will form the basis for further analysis focusing on the attribution of wheat varietal improvements in South Africa and their costs and benefits.
Liebenberg and Pardey 19 discussed historical evolution in order to document and describe major developments in the agricultural sector over the 20th and early-21st centuries and the changing policy and institutional environment of public support for agricultural R&D in South Africa. The article by Liebenberg and Pardey 19 was used to set the historical and policy context for further analysis, and the quantification and consideration of changes in public agricultural R&D investments between 1880 and 2007. We use the approach of Liebenberg and Pardey to discuss historical changes in the wheat sector and how they have shaped varietal improvements over the years.
Key developments and early history of wheat varietal improvements
Wheat production was first initiated in South Africa by Jan van Riebeeck during the winter of 1652 at the Cape of Good Hope. 4 Wheat production subsequently expanded, and by 1684 there were some exports to India. 7 However, South Africa is currently a net importer of wheat. The original cultivars produced at that time originated from Europe and the East Indies, and were brought by the early settlers and trading vessels. The selection criteria for the new varieties were then focused on adaptability to the new environment, such as resistance to stem rust, periodic droughts and wind damage. Table 1 summarises the key developments (institutional and policy) throughout the history of wheat varietal improvements in South Africa from the 1600s.
The first wheat breeding programme reportedly began in 1891 in the Western Cape Province. 20 The initial series of artificial crosses between varieties was conducted in 1902 and 1904 to retain the successful resistance of Rieti wheat while eliminating its poor milling quality and tendency to shed grain prior to harvesting. 7 The Small Grains Centre (SGC) was established as a research centre of the Highveld Region of the Department of Agriculture. The main objective of the SGC (now the Small Grains Institute) was to help improve the production of small grains, including addressing production challenges, investigating new production possibilities and transferring information to strategic points.
Table 1 (excerpt of key developments):
1992 – Establishment of the Agricultural Research Council (ARC): the establishment of the ARC in 1992 centralised all national agricultural research functions, including the mandate to serve historically segregated homeland areas.
1996 – Deregulation of the wheat sector: the Marketing of Agricultural Products Act (Act 47 of 1996), which led to the deregulation of the wheat sector, has had a significant impact on both wheat research and the industry.
Sources: various.
College of Agriculture. 7 Between 1950 and 1959, four wheat varieties were released, with only two making an impact on the wheat industry: Daeraad (Unie52A/Kruger) and Dromedaris (Hope/Gluretty). Neethling's retirement and the resultant break in continuity, coupled with increased interest from his successors in terms of using wide crosses, is arguably the reason for the limited activity in terms of varietal releases during this period.
The discussion above indicates that wheat varietal improvements in the early years of wheat breeding were specific to the production area, with little or no movement from one area to another. This situation has changed over time, and wheat breeding companies, although they focus on wheat varieties specific to the different wheat-growing regions of the country, aim to produce varieties that are adaptable across the country. According to the World Bank 21 , there was little movement of genetic improvement technologies in the 1950s and 1960s, especially from the temperate North to the tropical South. Their report further argues that the focus on adapting improved varieties to subtropical and tropical regions since the 1960s has generated high payoffs and pro-poor impacts, which are expected to continue to grow with rapid advances in the biological and informational sciences. Byerlee and Moya 15 also found that the initial focus of CIMMYT wheat breeding activities was on specific environments (particularly irrigated areas in Mexico and South Asia), which was later expanded to rain-fed areas to incorporate resistance to diseases such as septoria (Septoria spp.) and stripe rust (Puccinia striiformis f. sp. tritici) into CIMMYT germplasm. The further incidence of pests and diseases challenged CIMMYT to widen the focus on resistance to pests and diseases in different environments. Similarly, in South Africa, structural changes in the agricultural sector and the liberalisation of the wheat sector have also enabled the rapid growth of wheat breeding improvements that extend beyond the original regional production areas.
Establishment of ARC-SGI and wheat varietal improvements
The ARC-SGI was established in 1975 as the Small Grains Centre. The SGI was aimed at harnessing the impact of the then-fragmented research efforts (especially small-grain breeding programmes in the then Cape, Transvaal and Orange Free State Provinces) into an organisation running along the lines of CIMMYT following the recommendation of Dr Borlaug to the Department of Agriculture. 7,8,19 The Small Grains Centre was established as a research centre of the Highveld Region of the Department of Agriculture. The main objective was to help improve the production of small grains, including addressing production challenges, investigating new production possibilities and transferring information to strategic points. The SGI became an autonomous institute on 1 April 1995.
The Wheat Board, through motivations by Dr Jos de Kock, provided funding for a new Research Building in 1989 for the centre. De Villiers and Le Roux 8 report that 90% of the infrastructure at the ARC-SGI was funded by the Wheat Board and indirectly by wheat farmers. In an effort to harness fragmented research efforts, the SGI, since its establishment in 1975, has managed to initiate the following: a national seed multiplication scheme, a national cultivar evaluation scheme and breeding of cultivars that were nationally coordinated from Bethlehem (SGI supplied at least 65% of all nationally bred cultivars up to 1996). 8 The wheat variety improvements released by the ARC-SGI were started in 1975, and its contribution to wheat breeding remains very important to South Africa. The World Bank 21 argues that in areas where markets fail and it is difficult to appropriate benefits, public investments are required in agricultural R&D, such as wheat varietal improvements.
Wheat varietal releases, sources and uses
The sources and uses of wheat varietal innovations are presented by geographical region/area and wheat growth habit. In addition, we discuss varietal improvements by wheat breeding structural/policy shifts: before the establishment of the ARC-SGI; after the establishment of ARC-SGI to deregulate the wheat sector in 1996; and post-deregulation (1997-2013). Analysis of the wheat varietal improvement breeders is taken further by organisation type: local private companies such as Sensako and Pannar; local public organisations such as the ARC-SGI and universities; local individuals; foreign private companies; and foreign public organisations ( Figure 3). The results show that the local private sector, with a total of 171 wheat varieties, has the highest share of varieties released in South Africa for the period under study. Local public organisations, which include the ARC-SGI and universities, trail with 72 wheat varieties -less than half that of the local private sector. Results from Figures 2 and 3 clearly show that the private sector currently dominates wheat varietal improvements in the country. The current funding challenges in the public sector mean that the private sector will continue to dominate wheat varietal improvements. However, more effort would be required to support research that caters for all types of farmers, especially the emerging farmers who would want to grow wheat. This means that the public sector has a critical role to play in this area, in addition to releasing varietal innovations to large commercial farmers. The low rate of wheat varietal release in the late 1970s and early 1980s could have been driven by reduced government funding for all non-security departments in favour of increased demands for military support. 19 The introduction of the Marketing of Agricultural Products Act in 1996 led to the dissolution of the Wheat Board, which affected the funding originally provided by the Board for wheat varietal improvements in the country. The establishment of the ARC in 1992 centralised all national agricultural research functions, including the mandate to serve historically segregated homeland areas. The varieties were also analysed by the geographical area for which they were released.
Conclusions and recommendations
Wheat varietal innovations are important in agriculture, as they help to improve crop productivity, adaptability and resistance to pests and diseases, and also help to protect the environment. The main objective of this paper was to examine the historical evolution of wheat varietal improvements in the country, including the identification of popular varieties, and their history, sources and uses from 1891 to 2013.
About 501 varieties were released from wheat varietal innovations in South Africa between 1891 and 2013. From the 1800s, wheat varietal improvements in the country focused on addressing: adaptability to production area; yield potential and stability; and agronomic characteristics (e.g. tolerance to diseases, pests and aluminium toxicity). The main sources of wheat varietal improvements in South Africa are Sensako, ARC-SGI and Pannar. In terms of growth habits, most wheat varietal improvements have focused on spring and winter wheat varieties grown mostly under dryland conditions. Analysis by geographical area indicates that most of the wheat varieties released between 1891 and 2013 were for the Western Cape and Free State Provinces, which are the major wheat-producing areas in the country. Wheat varietal improvements in the early years of wheat breeding were decentralised and specific to the production area, with little or no movement from area to area. The structural changes that have occurred in the agricultural sector, particularly the establishment of the ARC-SGI and the deregulation of the wheat sector, have contributed to the effort to harness the impact of the existing fragmented research efforts, especially small-grain breeding programmes in the former Cape, Transvaal and Orange Free State Provinces.
Wheat breeding was initially driven by individual breeders and agricultural colleges. Since its establishment, Sensako has been the main source of wheat varieties, followed by the ARC-SGI and Pannar. The most popular varieties identified for further analysis, in terms of the attribution of costs and benefits of wheat varietal improvements, are Gariep, Elands and Duzi. The findings from this paper form the basis for a forthcoming paper focusing on the attribution of benefits and costs in terms of investment in wheat breeding in South Africa. Table 3 presents the distribution of wheat varieties released by growth habit. In the period 1891-1975, most of the wheat varietal releases focused on spring (13.80%); irrigation (9.85%) and winter (8.87%) growth habits. Spring (17.24%) and facultative growth habits dominated wheat varietal improvements in the period 1976 and 1996. Since the deregulation of the wheat market in 1996, spring, winter and facultative growth habits have dominated wheat varietal improvement research in South Africa. Liebenberg and Pardey 19 argue that initial agricultural R&D was decentralised and focused on specific environments and patterns of agricultural production. Specifically, the five agricultural colleges then focused their efforts on the main farming enterprises within their respective agro-ecological regions -for example, Elsenburg focused on winter grains. However, this situation has been greatly transformed over time, and public agricultural R&D is now more nationally centralised but has been experiencing a declining trend in recent years, at least since the deregulation of the wheat sector.
The analysis of wheat varietal releases was further divided into three distinct periods: the first comprising wheat varietal improvements prior to the establishment of the ARC-SGI; the second development from the establishment of the ARC-SGI to the deregulation period of the wheat sector; and the third period the post-deregulation period (1997 to 2013).
|
2018-12-12T07:42:37.210Z
|
2017-03-29T00:00:00.000
|
{
"year": 2017,
"sha1": "fe83e0cfba0e766e3ce8b5b3c162f85aeb3b0582",
"oa_license": "CCBY",
"oa_url": "https://www.sajs.co.za/article/download/3665/4772",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "fe83e0cfba0e766e3ce8b5b3c162f85aeb3b0582",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"History",
"Economics"
],
"extfieldsofstudy": [
"Geography"
]
}
|
247329687
|
pes2o/s2orc
|
v3-fos-license
|
Synthesis and Crystal Structure of 9,12-Dibromo-ortho-Carborane
Synthesis, NMR spectral data and crystal structure of 9,12-dibromo derivative of ortho-carborane
In this contribution we describe the synthesis of 9,12-dibromo-ortho-carborane and its characterization by NMR spectroscopy and single crystal X-ray diffraction.
Results and Discussion
Despite the fact that the bromination of ortho- and meta-carboranes was first described back in the mid-1960s [44], neither the yield of the bromination products nor their characterization (with the exception of X-ray diffraction data for crystals from the same syntheses [50-53]) had been described until recently. For the sake of fairness, it is worth noting an attempt to characterize the obtained bromo derivatives of ortho-carborane using 11B NMR spectroscopy; however, due to the very limited instrumental capabilities of that time, at present it is rather of historical interest [54]. The synthesis and NMR spectra of 9-bromo- and 9,12-dibromo-meta-carboranes were recently reported by Spokoyny et al. [45]. The NMR spectral data of 9-bromo-ortho-carborane, as well as its crystal and gas phase structures, were recently reported by Hnyk et al. [49,55]. As for 9,12-dibromo-ortho-carborane, its preparation was also mentioned relatively recently [56]; however, only numerical characteristics of the NMR spectra were reported without their assignment.
The main problem of the 9,12-dibromo-ortho-carborane synthesis is the purification of the target product. It was demonstrated that bromination of ortho-carborane, regardless of the Lewis acid and solvent used, gives, together with the desired 9-bromo-ortho-carborane, approx. 10 mol.% of 8-bromo-ortho-carborane. At the second stage, this leads to a crude product containing approx. 80% of 9,12-dibromo-ortho-carborane, together with a significant amount of the 8,9-dibromo and traces of the 8,10-dibromo derivatives [57]. Impurities of the 9-bromo and 8,9,12-tribromo derivatives may also be present in the reaction mixture, which greatly complicates the purification of the target product [58]. Unfortunately, all our attempts to purify the target compound using chromatography methods failed. Therefore, we purified 9,12-dibromo-ortho-carborane by fractional crystallization from chloroform, which produced a rather low (22%) yield of pure product (Scheme 1).

It should be noted that the structure of 9,12-dibromo-ortho-carborane was determined in 1966 [50] at room temperature. The quality of that experiment was evidently low and it was mostly concentrated on the description of the molecular geometry. Therefore, in the present study, we redetermined its structure at low temperature (110 K), focusing on both the molecular structure (Figure 1) and, especially, the crystal packing.
The presence of two bromine atoms might imply the formation of a Br...Br halogen bond in the crystal structure of 9,12-dibromo-ortho-carborane. At the same time, in our recent study [42] we showed that a halogen substituent at the B9 and B12 positions of the ortho-carborane cage can act as a good donor of the lone pair (LP); however, its acceptor ability is low, and therefore, the formation of any strong halogen bond in the crystal is hardly expected. Moreover, in the recently studied 1,12-Br2-ortho-C2B10H10, the C-H...Br interactions were found to be structure-forming, while no halogen bonds were observed [49]. It means that it is difficult to predict a priori what type of intermolecular interactions will be predominant in the crystal structure stabilization of dihalogen carboranes. The X-ray study of 9,12-Br2-ortho-C2B10H10 has revealed that both a Br...Br halogen bond of type II and C-H...Br hydrogen bonds are formed in the crystal (Figure 2). The halogen bond is rather weak and strongly distorted (the Br(1)...Br(2) distance is 3.796(2) Å, and the B(9)-Br(1)...Br(2) and B(12)-Br(2)...Br(1) angles are 92.5(3)° and 148.4°, respectively); the Br(1) atom acts as the LP donor while the Br(2) atom is the LP acceptor. Each molecule has two halogen-bonded neighbors and four C-H...Br bonded ones, which leads to the formation of layers parallel to the bc plane. In order to understand which interactions play a predominant role in the crystal structure formation, we carried out an energetic analysis of the crystal packing by estimating the dimeric interaction energies [42,59-61]. Such dimers are formed by the central molecule and a molecule taken from its closest environment. Here, we considered only those molecular pairs which are linked by the C-H...Br and Br...Br interactions, because all the other intermolecular interactions are of the van der Waals type. Calculations were carried out with the GAUSSIAN program [62] using the PBE0 functional and a triple-zeta basis set, which were found to be reliable for the analysis of halogen and hydrogen bonds [63-65].
As seen in Figure 2, the C-H...Br interactions are much stronger than the Br...Br halogen bonds and can be viewed as the structure-forming interactions in the crystal of 9,12-Br2-ortho-C2B10H10. The weakness of the observed halogen bond is also confirmed by the near equivalence of the B(9)-Br(1) (1.955(5) Å) and B(12)-Br(2) (1.963(5) Å) bond lengths. In the case of a strong halogen bond, the latter would have to be significantly longer, because the Br(2) atom acts as the LP acceptor.
Materials and Methods
All reactions were carried out under an argon atmosphere. Dichloromethane was dried using standard procedures [66]. The reaction progress was monitored by thin layer chromatography (Merck F254 silica gel on aluminum plates; n-hexane:chloroform 4:1 (v/v)) and visualized using 0.5% PdCl2 in 1% HCl in aq. MeOH (1:10). The NMR spectra at 400 MHz (1H), 128 MHz (11B), and 100 MHz (13C) were recorded with a Varian Inova 400 spectrometer. The residual signal of the NMR solvent relative to Me4Si was taken as the internal reference for the 1H and 13C NMR spectra. The 11B NMR spectra were referenced using BF3·Et2O as an external standard. Mass spectra (MS) were measured using a Shimadzu LCMS-2020 instrument with DUIS ionization (ESI, electrospray ionization, and APCI, atmospheric pressure chemical ionization). The measurements were performed in negative ion mode with a mass range from m/z 50 to m/z 2000. Isotope distributions were calculated using the Isotope Distribution Calculator and Mass Spec Plotter [67].
Anhydrous AlCl3 (0.80 g, 6.0 mmol) was added to a solution of ortho-carborane (5.0 g, 34.7 mmol) in dichloromethane (200 mL) and stirred for 15 min. A solution of Br2 (1.78 mL, 5.55 g, 34.7 mmol) in dichloromethane (50 mL) was added dropwise and the reaction mixture was stirred until it became colorless. Then, a second solution of Br2 (1.78 mL, 5.55 g, 34.7 mmol) in dichloromethane (50 mL) was added dropwise and the reaction mixture was heated under reflux for 16 h. The reaction mixture was cooled and treated with a solution of Na2S2O3 (30.00 g) in water (100 mL). The organic phase was separated, and the aqueous fraction was extracted with dichloromethane (3 × 50 mL). The organic fractions were combined, dried with anhydrous Na2SO4, filtered, and evaporated to dryness to give 9.75 g (93%) of crude product. Fractional crystallization from chloroform gave 2.30 g (22% yield) of pure 9,12-Br2-ortho-C2B10H10 as colorless crystals.

The single crystals of 9,12-Br2-ortho-C2B10H10 were grown by slow evaporation of a solution of the title compound in chloroform at room temperature. The single crystal X-ray diffraction experiment was carried out using a SMART APEX2 CCD diffractometer (λ(Mo-Kα) = 0.71073 Å, graphite monochromator, ω-scans) at 110 K. The collected data were processed with the SAINT and SADABS programs incorporated into the APEX2 program package [68]. The structure was solved by direct methods and refined by the full-matrix least-squares procedure against F2 in the anisotropic approximation. The refinement was carried out with the SHELXTL program [69]. The CCDC entry 2132434 contains the supplementary crystallographic data for this paper. These data can be obtained free of charge via www.ccdc.cam.ac.uk/data_request/cif (accessed on 15 February 2022).
|
2022-03-09T19:01:50.743Z
|
2022-03-01T00:00:00.000
|
{
"year": 2022,
"sha1": "a1223e5060e9e70646d67d4b466ba5bfb95abc5b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-8599/2022/1/M1347/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "edfd715f42fcc6e29f234f084f2824b5a3195814",
"s2fieldsofstudy": [
"Chemistry",
"Geology"
],
"extfieldsofstudy": []
}
|
230732825
|
pes2o/s2orc
|
v3-fos-license
|
β2M shuttle hypothesis in the dialysis related amyloidosis (DRA)
As well approved in current experimental studies over 20years or more, the intermediate molecules, i.e., the conformational variants of globular native protein, had been confirmed in the transitional process of the in vitro amyloidogenesis [1]. The presence of this molecule in hemodialysis (HD) setting had firstly reported in amyloid tissue from a femoral bone cyst in patient with the DRA by Bellotti’s group [2]. Then, we had identified this intermediate β2microglobulin (I-β2M) using with capillary electrophoresis (C.E) in serum not only from HD patients but also the chronic kidney disease (CKD) patients and healthy persons, then, we proposed “β2M shuttle hypothesis “ as amyloidogenic concept in clinical setting of HD [3,4]. This concept is based upon 5 evidences as follows:
β2-Microglobulin (β2M) is the precursor protein of dialysis-related amyloidosis (DRA) in long-term hemodialysis patients.
As well established in experimental studies over the past 20 years or more, intermediate molecules, i.e., conformational variants of the globular native protein, have been identified in the transitional process of in vitro amyloidogenesis [1]. The presence of this molecule in the hemodialysis (HD) setting was first reported in amyloid tissue from a femoral bone cyst in a patient with DRA by Bellotti's group [2]. We then identified this intermediate β2-microglobulin (I-β2M) using capillary electrophoresis (CE) in serum not only from HD patients but also from chronic kidney disease (CKD) patients and healthy persons, and we proposed the "β2M shuttle hypothesis" as an amyloidogenic concept in the clinical setting of HD [3,4]. This concept is based on five lines of evidence, as follows:

1) The ratio of N- to I-β2M in serum varies roughly from 5 to 10, with no difference among patients with CKD, HD patients and healthy persons. However, HD provokes a drastic conversion from N-β2M to I-β2M; consequently, post-HD serum taken 1 hour later, which is supposed to contain in part the interstitial β2M, showed variable CE profiles among individual cases but mostly a profile with an increased proportion of I-β2M accompanied by subpopulations of more cathodic β2M (I'-β2M), as shown in Figure 2 [4].
As for molecular structure, I-β2M consists of species with a partially unfolded C-terminal, which can refold reversibly to N-β2M. In contrast, I'-β2M is supposed to consist of species with a more extensively, but not completely, unfolded C-terminal, which is unlikely to refold to N-β2M.
2) The unfolding of the C-terminal: The conformational variant with the C-terminal unfolded from 92Ile to 99Met was first demonstrated by Stoppini et al., and we showed it to be present in amyloid tissue using a monoclonal antibody in 2005 [5,6]. A few years later, we also confirmed that the C-terminal of ⊿N6β2M was completely unfolded, the same as in 92/99β2M [7]. In addition, we showed "smoking gun" evidence that heparin can provoke C-terminal unfolding in native β2M at the clinical doses used in the HD setting, demonstrating a causative role of interstitial GAG molecules in the C-terminal unfolding of β2M, because heparin is one of the main GAG molecules among the matrix substances of the interstitial space [8].

3) ⊿N6β2M: ⊿N6β2M is a fragmental variant lacking the 6 N-terminal amino acids, which has been proved to be highly amyloidogenic and is therefore useful as a model molecule for A-β2M [9]. ⊿N6β2M was first reported in amyloid tissue from patients with the carpal tunnel syndrome and is considered to be a degradation product, formed by protease action on N-β2M or I-β2M in the amyloid tissue [9,10]. The amyloidogenicity of ⊿N6β2M was directly proved by us using an aptamer specific for ⊿N6β2M [11].
4) An alibi of A-β2M in serum: Amyloid proteins are ultimately unfolded conformers of physiological native proteins, including β2M, which is a pivotal component of MHC-I and the precursor protein of this amyloidosis. Thus far, no amyloid protein has been reported in serum in any kind of amyloidosis. Similarly, as for β2M, we also ruled out the presence of both A-β2M and ⊿N6β2M in serum from HD patients by LC/MS analysis, as shown in Figure 3 [12]. However, the charge-state ions showed interesting differences between standard β2M (Sigma) and ⊿N6β2M. Our study implies that amyloid protein must be formed in the extravascular space and cannot transfer across the vascular wall and, more importantly, cannot be cleared via the kidney or even by dialysis.

[Figure 2 caption, partial: ...at the start and the end of HD (b), and at 1 h after HD (c). (★) a peak on refolding; (#) a peak on more unfolding. [4]]

[Figure 3 caption: The m/z spectra of β2M; purified human urine β2M (Sigma) is centered at z = +7 (a), ΔN6β2M is centered at z = +10, and uremic serum is centered at z = +8 (c).]

...crossing over the vascular wall, and the C-terminal of I-β2M in serum might be partially, not completely, unfolded. In the interstitial space, by contrast, three kinds of β2M species co-exist, i.e., N-β2M, I-β2M and β2M92-99 with a completely unfolded C-terminal, which is hard to refold into N-β2M. First, the HD procedure itself gives rise to a conversion from N-β2M to I-β2M, both of which then undergo further unfolding at the C-terminal in the extravascular space, i.e., the space rich in GAG molecules with SO3 moieties. Next, some I-β2M species interact with GAG to convert to β2M92-99, which can give rise to polymers, while some I-β2M returns into the vascular space and refolds again to N-β2M [4].
In conclusion, β2M in serum exists ubiquitously in a dynamic equilibrium between N-β2M and I-β2M, with an overwhelming predominance of N-β2M over I-β2M. We believe that the presence of I-β2M with an unfolded C-terminal is a sine qua non for the development of DRA, because C-terminal unfolding could also be confirmed in the natural A-β2M, i.e., D76N β2M [8,13]. HD treatment inevitably provokes a drastic shift from N-β2M to I-β2M inside the vascular wall and, simultaneously, further unfolding at the C-terminal outside the vascular wall. In addition, over years of HD, the C-terminal of I-β2M transferred into the interstitial space becomes progressively more unfolded, resulting in an accumulation of β2M92-99, which leads directly to the development of DRA in the matrix space.
|
2020-06-04T09:10:24.864Z
|
2019-01-01T00:00:00.000
|
{
"year": 2019,
"sha1": "ee049869fa7de150a591cfe01ccfc637533cf31c",
"oa_license": "CCBY",
"oa_url": "https://www.oatext.com/pdf/NRD-4-157.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2f2c9469e9084ae0fe7bdac5b2bb05595cfc52d9",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
235549259
|
pes2o/s2orc
|
v3-fos-license
|
Death by political party: The relationship between COVID‐19 deaths and political party affiliation in the United States
Abstract This study explored social factors that are associated with the US deaths caused by COVID‐19 after the declaration of economic reopening on May 1, 2020 by President Donald Trump. We seek to understand how county‐level support for Trump interacted with social distancing policies to impact COVID‐19 death rates. Overall, controlling for several potential confounders, counties with higher levels of Trump support do not necessarily experience greater mortality rates due to COVID‐19. The predicted weekly death counts per county tended to increase over time with the implementation of several key health policies. However, the difference in COVID‐19 outcomes between counties with low and high levels of Trump support grew after several weeks of the policy implementation as counties with higher levels of Trump support suffered relatively higher death rates. Counties with higher levels of Trump support exhibited lower percentages of mobile staying at home and higher percentages of people working part time or full time than otherwise comparable counties with lower levels of Trump support. The relative negative performance of Trump‐supporting counties is robust after controlling for these measures of policy compliance. Counties with high percentages of older (aged 65 and above) persons tended to have greater death rates, as did more populous counties in general. This study indicates that policymakers should consider the risks inherent in controlling public health crises due to divisions in political ideology and confirms that vulnerable communities are at particularly high risk in public health crises.
INTRODUCTION
The 2019 coronavirus (severe acute respiratory syndrome coronavirus 2 or SARS-CoV-2) is a contagious virus associated with respiratory illness and severe pneumonia and is commonly called COVID-19. According to data from the World Health Organization, as of January 18, 2021, the COVID-19 pandemic had resulted in 93,611,355 confirmed cases and 2,022,405 confirmed deaths globally. 1 The virus first emerged in China before spreading to South Korea, Italy, and some European countries that experienced outbreaks in early 2020. On January 19, 2020, the first known COVID-19 case in the United States was that of a 35-year-old man who went to an urgent care clinic in Snohomish County, Washington, with cough and fever (Holshue et al., 2020). By March 29th, 2021, the United States had recorded 29,921,599 confirmed cases and 543,870 confirmed deaths due to COVID-19. Scholars proposed using social and behavioral science to support the pandemic response . For instance, Mayer (2020) criticizes President Trump's strategies for managing the health crisis, including his downplaying of the seriousness of the disease early in the pandemic. Mayer considers this mismanagement to be among the worst crisis responses in American history. This study responds by analyzing the roles of several social factors, including political polarization, in mitigating this pandemic in the United States.
Combating the coronavirus pandemic has burdened the US economy. On March 27, 2020, President Trump signed an economic relief package of over $2 trillion, the Coronavirus Aid, Relief, and Economic Security (CARES) Act. 2 Furthermore, starting in April 2020, several political leaders proposed relaxing previously imposed public health measures to relieve the burden on the economy, 3 while health professionals continued to warn against "reopening" the economy. 4 This sent a mixed signal to state and local governments, as well as to the public, about how they should react to the pandemic.
We hypothesize that the mixed messages contributed, in varying degrees, to the public's adherence to public health measures and that this resulted in differential COVID-19 casualty rates. Specifically, we seek to understand how social factors, including the distribution of political affiliations, social distancing activities, 5 and the duration of implemented public health policies influenced the number of deaths associated with COVID-19 at the county level in the United States. We find evidence that political ideology interacts with public health policies in such a way that counties with higher levels of Trump support suffer worse COVID-19 outcomes than comparable counties with lower levels of Trump support. In other words, public health policies appear to be less effective in certain counties than others as a result of locally dominant political ideologies. Furthermore, we present evidence that this may be due to poor compliance with public health policies in those particular counties.
BACKGROUND
COVID-19 patients generally present with fever and cough (Carlos et al., 2020; N. Chen, Zhou, et al., 2020; Chung et al., 2020; Shi et al., 2020; Song et al., 2020) and, in the early stages of the pandemic, were often diagnosed by computerized tomography (CT) scan and by analysis of their travel histories (Chung et al., 2020; Fang et al., 2020; Kim et al., 2020; Wilson & Chen, 2020). Specialized tests for detecting the virus were developed within several months. The estimated incubation period for COVID-19 ranges from 2.1 to 11.1 days with a mean of 6.4 days. On January 23, 2020, the World Health Organization (WHO) reported 581 confirmed cases and only 10 cases outside of China (World Health Organization, 2020). However, COVID-19 has high transmissibility and was, therefore, able to quickly spread globally despite travel precautions (Riou & Althaus, 2020).
With respect to the treatment of COVID-19, there are some drug treatment options and suggestions from doctors (Z.-M. Chen, Fu, et al., 2020;Jin et al., 2020;Lu, 2020). However, these treatments exhibit limited efficacy among high-risk groups. Therefore, prior to the development and widespread distribution of vaccines in 2021, policymakers and healthcare practitioners emphasized policies aimed at slowing the spread of COVID-19 due to the limited treatment options, the seriousness of symptoms, and the high transmissibility rate. Quarantine is a common method that governments have adopted worldwide (Carlos et al., 2020). Scholars suggest that cultural tightness and government efficiency play significant roles in controlling health crises (Gelfand et al., 2020). For instance, China adopted the most extensive quarantine in recent history to combat COVID-19. In some communities (Yiyang county, Luoyang City, Henan Province), only one person from a family was allowed to go out every day, with their temperature being taken before doing so. Additionally, grocery stores would test patrons' temperatures before admittance. Temperatures of all family members were reported to their local communities daily at the peak of the pandemic. However, some countries adopted quite different policies with respect to controlling the pandemic. In the United States, a policy of social distancing depended heavily on each individual's self-precautions and was largely unenforced by the government. The reliance on self-enforcement of preventative measures common in the United States means political ideology could play a role in the adoption of public health policies and recommendations.
Political affiliation
Political ideology plays an important role in how individuals form attitudes (Van Holm et al., 2020;Zaller, 1992) and process information (Lodge & Taber, 2013). Political ideology may even influence individuals' health behaviors. For example, Republicans have been found to be less likely to get the H1N1 vaccine in comparison with Democrats (Mesch & Schwirian, 2015). Survey research also finds that Democrats are more likely to adopt several health-protective behaviors, more likely to worry, and more likely to support social distancing policies (Kushner Gadarian et al., 2021). Republicans appear to be less concerned about COVID-19, practice social distancing less, follow the social distancing orders after the state-wide policy enactment less and are less likely to shift their consumption toward e-commerce (Allcott et al., 2020;Gadarian et al., 2020;Gollwitzer et al., 2020;Painter & Qiu, 2020). Democrats, on the other hand, are more likely to exercise protective actions against COVID-19 like taking fewer trips, staying home more, maintaining safe distances, and touching their own faces less frequently (Van Holm et al., 2020). Governors' recommendations for residents to stay home did significantly more to reduce mobility in Democratic-leaning counties (Grossman et al., 2020).
As a polarizing Republican president, Donald Trump provides a benchmark for policy preference among Republicans but not Democrats, which may lead to differences in responding to policies and consequently may influence the spread of the COVID-19. The president publicly disagreed with health experts about what policies should be applied to manage COVID-19. 6 On March 23, Trump claimed that America would reopen the economy against the warnings of health experts. 7 By April 16, President Trump issued guidelines to enable states to reopen; governors could open their economies at either the state level or county-by-county. 8 Republican governors and governors from states with more Trump supporters were slower to adopt social distancing policies (Adolph et al., 2020). Political affiliation may have played a role in people's pandemic behaviors and consequently influenced subsequent death rates. In September 2020, President Trump even publicly admitted that he downplayed COVID-19 at the initial stages to reduce the panic. 9 Relatedly, Painter and Qiu (2020) found that Republicans were more likely to assign credibility to the advice of Trump in comparison to other state officials. Trump voters search less for information on COVID-19 and engage in less social distancing behavior (SDB) (Barrios & Hochberg, 2020). Counties that voted for Trump in the 2016 election exhibited 16% less physical distancing than counties that voted for Hillary Clinton and pro-Trump voting has been found to be indirectly associated with a higher growth rate in COVID-19 infections and fatalities (Gollwitzer et al., 2020). Due to the expected differential adherence to public health protocols, we hypothesize that the dominant political affiliation in a county will predict higher or lower COVID-19 death rates: Hypothesis H 1 (Political Affiliation): Counties with higher levels of Trump support will experience greater weekly COVID-19 death rates.
Policy duration
In the United States, a variety of policies were implemented at the state level or the county level, including shelter-in-place orders (SIPOs), 10 closures of restaurants/bars/entertainment-related businesses, bans on large events, and closures of public schools. The effectiveness of these policies varied widely; SIPOs and closures of nonessential businesses worked toward curtailing COVID-19, while the prohibition of large events and closure of public schools did not show signs of slowing down COVID-19 (Courtemanche et al., 2020; Dave et al., 2020, 2021). Statewide SIPOs had the strongest effect, accounting for a 37% decrease in confirmed cases 15 days after implementation (Abouk & Heydari, 2020). Additionally, the impact of a social distancing policy has a significant cumulative effect (Dave et al., 2021). For instance, the daily growth rates were reduced by 5.4 percentage points after 1-5 days of government-imposed social distancing measures and 9.1 percentage points after 16-20 days (Courtemanche et al., 2020). We therefore expect that counties with long-lasting social distancing policies will experience relatively lower coronavirus death rates.
Hypothesis H2 (Policy Duration): The longer certain COVID-19 policies were in effect in a county, the fewer COVID-19 deaths the county will experience per week.
Hypothesis H2a: The longer the implementation of a SIPO, the fewer deaths per week a county will experience.
Hypothesis H2b: The longer the implementation of a public-school closure, the fewer deaths per week a county will experience.
Hypothesis H2c: The longer the implementation of a dine-in restaurant closure, the fewer deaths per week a county will experience.
Hypothesis H2d: The longer the implementation of an entertainment facility and gym closure, the fewer deaths per week a county will experience.
Additionally, political ideology may moderate the effect of policy duration on the death count per county. Therefore, we hypothesize that there is an interaction effect between political ideology and policy duration on the deaths caused by COVID-19:
Hypothesis H2e: The proportion of Trump supporters per county will mitigate the effect of policy duration on suppressing COVID-19 deaths.
Put another way, as the duration of a health policy in a county increases, the number of deaths per county will increase more rapidly in counties with higher levels of Trump support than in counties with lower levels of Trump support.
SDB: Working mode
Tang et al. (2020) found that the best method to stop the spread of COVID-19 is persistent and strict self-isolation. However, not all individuals are able to fully self-isolate, particularly those in certain jobs. To account for this, we measured three working types during the pandemic: staying at home completely, working outside the home part time, and working outside the home full time. Working from home corresponds to strict adherence to self-isolation, working outside the home part time corresponds to a moderate level of self-isolation, and working outside the home full time corresponds to nonadherence to social distancing.
Hypothesis H3a (Working modes): Counties with more people working from home tend to have fewer weekly COVID-19 deaths.
Hypothesis H3b (Working modes): Counties with more people working part-time from home tend to have fewer weekly COVID-19 deaths.
Hypothesis H3c (Working modes): Counties with more people working full time tend to have more weekly COVID-19 deaths.
Control variables
Population density has been shown to play an important role in understanding influenza mortality. In denser areas, the mortality rate has been found to be significantly higher in comparison to less dense areas (Chandra et al., 2013). Related to COVID-19, rural counties with low population density appear to have gained very little from social distancing policies, especially statewide orders, which suggests that more nuanced policies that account for the heterogeneity of counties are needed to defeat the pandemic (Dave et al., 2020). The risk of death among COVID-19 infected individuals is between 0.3% and 0.6% (Nishiura et al., 2020). According to scientists (D. Wang, Hu, et al., 2020), older individuals with COVID-19 have a higher mortality rate than do other age groups. We therefore control for the size of the population 65 years of age or older within a county. We use 65 as a cutoff because people aged 65 and above qualify for Medicare; other age groups do not. Low income exacerbates the risk of death due to higher proportions of certain health issues, such as smoking (Krueger & Chang, 2008), heart disease (Lotufo et al., 2013; Redmond et al., 2013), and cancer (Najem et al., 1985; Singh & Jemal, 2017; Tolkkinen et al., 2018) among low-income populations. Furthermore, poverty may exacerbate negative pandemic outcomes as low-income individuals have diminished access to high-quality health care. 11 Additionally, scholars suggest that people of color in America potentially suffer more from this pandemic because of their pre-existing disadvantages in health, social, and economic status (Cooper & Williams, 2020). 12 Because population size, age, income, and race are likely correlated with local pandemic outcomes, and these variables are likely correlated with the levels of Trump support per county, we control for all four.
Data
There are a variety of data sources available for COVID-19 including those provided by WHO, CDC, and Johns Hopkins University. Here, the count of COVID-19 deaths per county is provided by Johns Hopkins University's CSSE COVID-19 Tracking Project 13 and Dashboard. 14 As for the county political affiliation information, this paper uses data on the 2016 US Presidential Election from the MIT Election Data Science Lab (Data & Lab, 2018). 15 Population, race, and income data are obtained from the U.S. Census. 16 To measure SDB, we use the Social Distancing Metrics 17 data provided by SAFEGRAPH, which includes information about people's working modes based on mobile device telemetry.
Measures
To measure aggregate political preferences at the county level, we compute the level of Trump support per county from the 2016 presidential campaign as the number of total votes for Trump divided by the total number of votes per county. We base this calculation on the assumption that the vast majority of the votes in any given county were for candidates in the two major parties; we essentially ignore the influence of all third party candidates. We also assume that Trump support did not change substantially between 2016 and 2020. In measuring the duration of health policies, we count the length in days since a policy's first implementation; policies of interest include the closing of public schools, the closing of restaurants, the closing of entertainment facilities and gyms, and SIPOs. 18 In terms of the social distancing activities, we measure the proportion of people who stayed at home completely, the proportion of people who worked part time, and the proportion of people who worked full time relative to the overall county population. These represent the three types of working routines. In SAFEGRAPH, home is defined as the "common nighttime location for the device over a 6-week period where nighttime is 6 pm-7 am," and the device count is measured by the "number of devices seen in our panel during the date range whose home is in this census block group." The data do not include "any census block groups where the count <5." 19 Descriptive statistics are provided in Table 1.
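As a concrete illustration, the county-level measures described above can be assembled roughly as follows. This is a minimal sketch, not the authors' code; the file names and column names (e.g., countypres_2016.csv, candidatevotes, sipo_start) are assumptions loosely modeled on the MIT Election Lab layout and may differ from the actual data.

```python
import pandas as pd

# Hypothetical file and column names; the real election and policy files may differ.
votes = pd.read_csv("countypres_2016.csv")                 # county-level 2016 returns
trump = votes[votes["candidate"] == "Donald Trump"]

# Level of Trump support = votes for Trump / total votes cast in the county.
trump_share = (trump.groupby("FIPS")["candidatevotes"].sum()
               / votes.groupby("FIPS")["totalvotes"].first()).rename("trump_share")

# Policy duration = days elapsed since first implementation, floored at zero for
# counties where the policy had not yet started by the reference date.
policies = pd.read_csv("county_policies.csv", parse_dates=["sipo_start", "school_close"])
as_of = pd.Timestamp("2020-04-06")
for start_col, out_col in [("sipo_start", "sipo_days"), ("school_close", "school_days")]:
    policies[out_col] = (as_of - policies[start_col]).dt.days.clip(lower=0)

measures = policies.merge(trump_share, left_on="FIPS", right_index=True, how="left")
```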
Data analysis
This study uses a zero-inflated negative binomial model. The time frame of this study is from April 6 to May 25. We also present supplementary pooled ordinary least squares, random effects, and fixed effects models in Table A1. 20 The dependent variable is the weekly death count per county, lagged by 2 weeks relative to the covariates. Since May 1 was the day that many states chose to reopen their economies, we focus on SDBs from April 6 to May 11 as the key independent variables. For instance, the SDB in the first week is represented by the aggregate device movement (i.e., working type) on April 6. The lengths of policies are also calculated from April 6. The first model mainly examines the relationships between SDBs, aggregate political preference, and the number of deaths 2 weeks later. The model formula for the zero-inflation component (omitting the log link function) is not reproduced here; a sketch of the estimation setup is given below. We run the above-specified model four times, once each for all possible interactions between the Trump support rate and the four selected policies. We focus primarily on this specification (shown in Table 2 as Model 3) and provide the others in Appendix A.
Figure 1 shows that the total number of devices detected by SAFEGRAPH from counties with low Trump supporter levels (≤0.25) is about 0.9 million more than the total number of devices from high Trump supporter level (≥0.75) counties. This gap narrows at the beginning of April. Figure 2 shows that the total number of devices staying at home from the low Trump support level counties is about 0.3 million more than from the high Trump support level counties. However, this gap increases until April 1st, by which point the low Trump support level counties have 0.55 million more devices staying at home than the high Trump support level counties. This trend suggests that social distancing policies are adhered to more effectively in Democratic counties than in Republican counties. Figures 3 and 4 show that the total number of devices out of the home part time and full time on February 1 from low Trump support level counties is 80,000 more than from high Trump support level counties. However, these gaps decrease until mid-April. By mid-March, the number of devices belonging to persons working outside the home part time is greater in high Trump support level counties, and the gap for full-time work outside the home has narrowed to just 10,000 devices. Figures 1 through 4 suggest aggregate differences in how individuals in high Trump support level counties and low Trump support level counties responded to the pandemic between February and May. In particular, they point to decreases in the number of devices associated with outside-the-home working styles in low Trump support level counties relative to high Trump support level counties. With this in mind, we turn now to the results of our regression analysis that will allow us to isolate the relationship between political ideology and county-level COVID-19 outcomes.
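A zero-inflated negative binomial specification of the kind described above could be estimated along the following lines. This is a hedged sketch rather than the authors' code: the regressor list, the column names, the intercept-only zero-inflation part, and the use of statsmodels' ZeroInflatedNegativeBinomialP class are all assumptions, since the paper's exact formula is not reproduced in this text.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

# Hypothetical panel: one row per county-week, with the measures built earlier.
df = pd.read_csv("county_week_panel.csv")

y = df["deaths_2wk_ahead"]                 # weekly death count, lagged 2 weeks forward
X = df[["trump_share", "sipo_days", "school_days", "dinein_days", "gym_days",
        "share_home", "share_part_time", "share_full_time",
        "pop_density", "share_65plus", "median_income", "share_poc"]].copy()
X["trump_x_sipo"] = X["trump_share"] * X["sipo_days"]   # interaction of interest
X = sm.add_constant(X)

# Zero-inflation component: intercept-only logit here, as a placeholder assumption.
Z = np.ones((len(y), 1))

model = ZeroInflatedNegativeBinomialP(y, X, exog_infl=Z, inflation="logit", p=2)
result = model.fit(method="bfgs", maxiter=500)
print(result.summary())
```

In practice one such model would be fit four times, swapping in the interaction between Trump share and each of the four policies in turn, mirroring the design described above.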
RESULTS
We focus our attention on the fully specified Model 3 in Table 2. While the coefficient for the level of Trump support is positive, it is not significant; we find no evidence for a relationship between supporter rate and county-level COVID-19 death rates (H1) after controlling for demographics, policy implementation, and working mode. However, the interaction effect between the level of Trump support per county and the duration of implementation of a SIPO is positive and statistically significant. Figure 5 (predictions from Model 3 in Table 2, the zero-inflated negative binomial model, testing Hypotheses H1 and H2; shaded regions show 95% confidence intervals) plots predicted death counts for hypothetical counties with high, moderate, and low levels of Trump support. As the durations of SIPOs increase, the predicted COVID-19 death counts in all hypothetical counties also increase; however, the differences between these three groups become very pronounced after 3 weeks of a SIPO. In other words, even controlling for observed compliance via SDB data, we find that SIPOs are nonetheless apparently less effective in counties with high levels of Trump support. In particular, for a hypothetical county with zero Trump supporters, the coefficient associated with SIPOs is very near zero (0.015), controlling for compliance. For a hypothetical county that is 100% Trump supporters, the coefficient for SIPOs is just above 0.04 (the sketch below illustrates how these coefficients translate into predicted counts). Additionally, the interaction effects between the level of Trump support and two other policies (the prohibition on restaurant dine-in and the closing of entertainment facilities and gyms) exhibit similar trends, as shown in Figures 7 and 8. 21 Figure 8 illustrates the differential relationship between restaurant dine-in prohibitions and predicted COVID-19 deaths for counties of differing aggregate political ideologies. For counties with very low levels of Trump support, restaurant policies resulted in decreases in the average death count over time while the opposite is true for counties with high levels of Trump support.
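Because the count model is log-linear, the SIPO and interaction coefficients combine multiplicatively in the predicted death count. The snippet below uses only the two rounded coefficient values quoted in the text (roughly 0.015 at 0% support and 0.04 at 100% support) purely as an illustration; these numbers are read off the prose, not the underlying tables, and other covariates are held fixed.

```python
import numpy as np

beta_sipo = 0.015                   # SIPO coefficient at zero Trump support (as quoted)
beta_interaction = 0.04 - 0.015     # implied interaction slope per unit of Trump share

def relative_predicted_deaths(trump_share, sipo_days):
    """Multiplicative change in expected deaths after sipo_days of a SIPO,
    holding all other covariates fixed (log-linear count model)."""
    return np.exp((beta_sipo + beta_interaction * trump_share) * sipo_days)

for share in (0.0, 0.5, 1.0):
    print(f"Trump share {share:.0%}: x{relative_predicted_deaths(share, 21):.2f} after 3 weeks")
```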
The only policy that does not follow the patterns as described above is public school closures ( Figure 6). This is also the only policy for which the associated model coefficient is negative and significant. We suspect that the insignificant interaction effect here may be due to mandatory enforcement of this policy by state and local governments; while the other policies require individuals or small business owners to comply to assure efficacy, it is difficult to imagine how individuals would fail to comply with public school closures. These findings are generally in agreement with our expectations as outlined in H 2e : Trump supporter level mitigates the effectiveness of public health policies. Why these policies generally appear to be less effective in Trump-supporting counties is worth closer attention.
The inclusion of SAFEGRAPH data on working modes (home, part time, and fully outside the home) should at least partially control for noncompliance with these policies. Nonetheless, we find that Trump-supporting counties fare worse than their non-Trump-supporting counterparts over the course of public health policy implementation. We suspect this is due to forms of noncompliance that are not fully captured by the working mode covariates; these may include improper mask usage or failure to social distance in nonprofessional settings (e.g., parties or social gatherings).
We find little support for H 2a through H 2d : duration of school closure is the only policy that is associated with a statistically significant decrease in COVID-19 deaths. However, we caution against interpreting this finding directly: policy implementation is likely a function of both the current coronavirus case count in a county as well as a county's overall risk. Therefore, positive coefficients on policies (such as that associated with the closure of gyms and entertainment venues) may be due to the late implementation of those policies after increases in coronavirus cases had already become near-unavoidable. Furthermore, the counterfactual number of cases in counties without those policies is not clear.
Similarly, we fail to reject the null hypotheses for H3a through H3c, our working mode hypotheses. In fact, Model 3 indicates that the proportion of devices (relative to population) staying completely at home is associated with an increase in the predicted number of COVID-19 deaths and that the reverse is true for the proportion of devices working outside the home full time. As with the findings for H2, we suspect this may be due to reverse causality: compliance is higher in areas with greater coronavirus risk (Figures 5-8).
FIGURE 8. Interaction effect of Trump support level and the duration of the restaurant dine-in closure policy.
Table 3 shows that high Trump support counties have, on average, significantly more people working full time or part time outside the home and fewer people staying at home than comparable low Trump support counties. We demonstrate this in a series of four linear models of working mode (represented as a proportion of the total population) regressed on predictors of working mode, including the level of Trump support (a minimal sketch of such a model is given below). Models 2 through 4 in Table 3 show that the level of Trump support correlates with working mode behaviors that are contrary to public health guidance, even when controlling for the duration for which that guidance has been in place. This indicates that individuals in counties with high levels of Trump support show less compliance with these health policies. This finding reinforces our suspicion that the positive interaction effects found between policy implementation duration and level of Trump support are likely the result of poor compliance with public health guidance.
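The working-mode regressions summarized in Table 3 are ordinary linear models of a compliance measure on Trump support, policy durations, and controls. A minimal sketch with statsmodels' formula interface follows; the variable names are illustrative assumptions, and the actual models may include additional controls.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical county-week panel; column names are illustrative only.
df = pd.read_csv("county_week_panel.csv")

for outcome in ["share_home", "share_part_time", "share_full_time"]:
    formula = (f"{outcome} ~ trump_share + sipo_days + school_days + dinein_days + gym_days"
               " + pop_density + share_65plus + median_income + share_poc")
    fit = smf.ols(formula, data=df).fit(cov_type="HC1")   # heteroskedasticity-robust SEs
    print(outcome, round(fit.params["trump_share"], 4), round(fit.pvalues["trump_share"], 4))
```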
TABLE 3. Social distancing behaviors, political ideology, and health policies.
Finally, we note that the percentage of people of color (not including Asian persons) per county is positively associated with the number of COVID-19 deaths per county. This provides evidence that communities of color suffer more from COVID-19 than do communities with fewer people of color.
DISCUSSION AND POLICY IMPLICATIONS
We find that political ideology plays a role in health outcomes during major public health crises. By interacting ideology with measures of public health policy, we demonstrate that disparate public health outcomes during the coronavirus pandemic are likely due in part to differences across ideological lines in the practical implementation of public health policies. After controlling for a number of determinants of COVID-19 death counts, we find that ideology, operationalized as county-level Trump support, is not predictive of increased COVID-19 mortality on its own. However, predicted rates of COVID-19-related deaths in counties with high levels of Trump support increase along with the duration of implementation of several COVID-19 policies (restaurant closures, gym and entertainment facility closures, and SIPOs). We hope these findings encourage policymakers and opinion leaders to consider the risks associated with mixed messaging during future health crises. Encouraging noncompliance with public health directives along ideological lines leads to suboptimal public health outcomes and, in the case of the coronavirus pandemic, unnecessarily high death rates. Policymakers should balance the cost of sacrificing individual freedoms against the grave health outcomes suffered disproportionately by vulnerable groups.
However, we also urge caution when interpreting our findings in the context of policy recommendations. Our study covered only a small time period (April 6 through May 25, 2020) and our conclusions may not generalize well beyond this period. As researchers learned more about the virus and the public increasingly saw its effects first-hand, both public health guidance and compliance may have adjusted accordingly.
Concerning SDB, we find that the number of people who work part time or full time outside the home is positively associated with the level of Trump support at the county level. Additionally, the number of people who work from home is negatively associated with the level of Trump support. This suggests that mixed health signals from experts and politicians may influence individuals' compliance with public health directives, even during major crises. Mixed signals from politicians may potentially cause people to underestimate the seriousness of a health crisis.
CONFLICTS OF INTEREST
The authors declare that there are no conflicts of interest. This research was not supported by any grants or outside sources. No patient data or personally identifiable information were used in this study.
ETHICS STATEMENT
All data and code needed to reproduce the study will be made publicly available on the author's website at the time of publication.
Note (Table A1): Robust standard errors are given in parentheses. ***p < 0.001; **p < 0.01; *p < 0.05. Observations: 10,012 in each model. a The Hausman test shows a significant difference (p < 0.001) between the coefficients of the fixed effects and random effects models, so this study uses fixed effects for time-variant variables. However, the random effects model has multiple advantages, such as incorporating time-invariant variables (Bell & Jones, 2015), so we add a random effects model as a reference for explaining time-invariant variables' effects on deaths.
Imprint of topological degeneracy in quasi-one-dimensional fractional quantum Hall states
We consider an annular superconductor-insulator-superconductor Josephson-junction, with the insulator being a double layer of electron and holes at Abelian fractional quantum Hall states of identical fillings. When the two superconductors gap out the edge modes, the system has a topological ground state degeneracy in the thermodynamic limit akin to the fractional quantum Hall degeneracy on a torus. In the quasi-one-dimensional limit, where the width of the insulator becomes small, the ground state energies are split. We discuss several implications of the topological degeneracy that survive the crossover to the quasi-one-dimensional limit. In particular, the Josephson effect shows a $2\pi d$-periodicity, where $d$ is the ground state degeneracy in the 2 dimensional limit. We find that at special values of the relative phase between the two superconductors there are protected crossing points in which the degeneracy is not completely lifted. These features occur also if the insulator is a time-reversal-invariant fractional topological insulator. We describe the latter using a construction based on coupled wires. Furthermore, when the superconductors are replaced by systems with an appropriate magnetic order that gap the edges via a spin-flipping backscattering, the Josephson effect is replaced by a spin Josephson effect.
I. INTRODUCTION
One of the hallmarks of the fractional quantum Hall effect (FQHE) is that if the two-dimensional electron system resides on a manifold with a nontrivial topology, it will have a ground state degeneracy which depends on the topology [1]. For a fractional quantum Hall state on an infinite torus, the degeneracy of the ground state equals the number of topologically distinct fractionalized quasiparticles allowed in that state. Since this degeneracy is topological, it does not originate from any symmetry, and in particular does not require the absence of disorder. Furthermore, no local measurement may distinguish between the degenerate ground states.
When the torus is of large but finite size, the degeneracy is split, but the splitting is exponentially small in L, where L = min {L x , L y } and L x , L y are the two circumferences of the torus. In the thin torus regime, where one circumference of the torus is infinite and the other is smaller or comparable to the magnetic length, the fractional quantum Hall state crosses over into a charge density wave (CDW), and the degenerate ground states correspond to different possible phases of the CDW [2][3][4]. In that regime a local impurity may pin the charge density wave and lift the degeneracy between the ground states. Equivalently, a local measurement is able to identify the phase of the CDW, and hence the ground state.
In this work we consider two systems that are topologically equivalent to a torus, and, unlike the torus, are within experimental reach. The first is that of an annular shaped electron-hole double-layer in which the electron and hole densities are equal, and are both tuned to the same FQHE state (see Fig. (1a)). In the absence of any coupling between the layers, both the interior edge and the exterior edge of the annulus carry pairs of counter-propagating edge modes of the electrons and the holes. These pairs may be gapped by means of interlayer back-scattering, resulting in a fully gapped system with the effective topology of the torus. In fact, this system is richer than a seamless torus, since the interior and exterior edges may be gapped in different ways. In particular, gapping the counter-propagating edge modes by coupling them to a superconductor may have interesting consequences. Some of these consequences are central to the current paper.
The second realization we consider is that of a two dimensional time-reversal-invariant fractional topological insulator [5]. To be concrete, we assume that it is constructed of wires subjected to spin-orbit coupling and electron-electron interaction (see Fig. (1b)). In this realization, electrons of spin-up form a FQHE state of filling factor ν, and electrons of spin-down form a FQHE of filling factor −ν. Similar to the particle-hole case, the edges carry pairs of counter-propagating edge modes with opposite spins that may be gapped in different ways. Remarkably, when the edge modes are gapped by being coupled to superconductors, the system is invariant under time-reversal, yet topologically equivalent to a FQHE torus.
FIG. 1. (a) The first realization we consider is that of an electron annulus (blue) and a hole annulus (red) under the action of a uniform magnetic field. It is evident that coupling the annuli's edges forms the topology of a torus. The second realization we suggest is that of a fractional topological insulator. Fig. (b) shows a possible model for a fractional topological insulator: an array of N wires with a strong spin-orbit coupling that is linear in the wire index n. The similarity of the resulting spectrum (see Fig. (3a) below) to the one corresponding to the wires construction of quantum Hall states suggests an equivalence to two quantum Hall annuli subjected to opposite magnetic fields (each annulus corresponds to a specific spin). The use of the wires construction enables us to include interaction effects using a bosonized Tomonaga-Luttinger liquid theory for the description of the wires. (c) The edge modes of the two above models can be gapped out by proximity coupling to superconductors. In the case of a thin (quasi-1D) system, the phase difference between the inner and the outer superconductors leads to a Josephson effect mediated by tunneling across the region of a fractional quantum Hall double layer or a fractional topological insulator. The spectrum as a function of the phase difference ϕ is depicted in Fig. (2) below. The edge modes can also be gapped using proximity to magnets, in which case one can measure the spin-Josephson effect.
We use these realizations of a toroidal geometry and their inter-relations to investigate the transition of a fractional quantum Hall system from the thermodynamic two-dimensional to the quasi-one-dimensional regime of a few wires. In particular, we find signatures of the topological ground state degeneracy of the two-dimensional (2D) limit (akin to that of fractional quantum Hall states on a torus) that survive the transition to the quasi one-dimensional (1D) regime and propose experiments in which these signatures may be probed. For example, for an Abelian fractional quantum Hall state, we find a 2πd-periodic Josephson effect, where d is the degeneracy in the 2D thermodynamic limit. The structure of the paper is as follows: in Sec. II we summarize the physical ideas and the main results of the paper. In Sec. III we define the systems in more detail and identify the topological degeneracy in the thermodynamic limit. In Sec. IV, we discuss the quasi one-dimensional regime, and point out observable signatures of the topological degeneracy in that regime. Our discussions in these sections focus on the ν = 1/3 case. In Sec. V we discuss how the results of the previous sections are generalized to other Abelian QHE states.
II. THE MAIN RESULTS AND THE PHYSICAL PICTURE
A. The systems considered
The electron-hole double-layer system is conceptually simple to visualize (see Fig. (1a)). We consider an electron-hole double-layer shaped as an annulus with equal densities of electrons and holes, and a magnetic field that forms FQHE states of ±ν in the two layers. The system breaks time reversal symmetry, but its low energy physics satisfies a particle-hole symmetry. For most of our discussion we focus on the case ν = 1/3. In that case each edge carries a pair of counter-propagating ν = 1/3 edge modes. The edge modes may be gapped by means of normal back-scattering (possibly involving spin-flip, induced by a magnet) or by means of coupling to a superconductor. In line with common notation, we refer to these two ways as F and S respectively.
To model the fractional topological insulator we consider an array of N coupled quantum wires of length L_x, each satisfying periodic boundary conditions (Fig. (1b)). The wires are subjected to a Rashba spin-orbit coupling, and we consider a case in which the spin-orbit coupling constant in the n'th wire is proportional to 2n − 1 (similar to the model considered by Ref. [6]). Effectively, this form of spin-orbit coupling subjects electrons of opposite spins to opposite magnetic fields. While this particular coupled-wire model of a time reversal invariant topological insulator does not naturally allow for the regime of a large N, other realizations, such as those proposed in Ref. [6,7], allow for such a regime. These realizations require more wires in a unit cell, and are therefore more complicated than the one considered here. Most of the results of our analysis are independent of the specific realization of the fractional topological insulator, and we present the analysis for the realization that is simplest to consider.
For non-interacting electrons, the spectrum of the array we consider takes the form shown in Fig. (3a). Single-electron tunneling processes (which conserve spin) gap out the spectrum in all but the first and last wires, which carry helical modes (Fig. (3b)). If the chemical potential is tuned to this gap, then in the limit of large N the system is a topological insulator (TI), and therefore the gapless edge modes are protected by time-reversal symmetry and charge conservation [8]. This construction is then equivalent to two electron QH annuli with opposite magnetic fields.
The edge modes may be gapped by coupling the two external wires (n = 1 and n = N ) to a superconductor or to a system with appropriate magnetic order. A Zeeman field that is not collinear with the spin-orbit coupling direction is necessary to couple the different spin directions. Moreover, in our coupled-wires model the spin-up and the spin-down electrons at the n = N edge have different Fermi-momenta, so that edge would not be gapped by a simple ferromagnet. In order to conserve momentum one would need to introduce a periodic potential that could modulate the coupling to the ferromagnet at the appropriate wave vector, or one would need to use a spiral magnet with the appropriate pitch. In more sophisticated wire models, such as those discussed by Refs. [6,7], or in actual realizations of topological insulators, the two edge modes can have the same Fermi momenta, so a simple ferromagnet can be used.
In order to construct a fractional topological insulator, we first tune the chemical potential such that the density is reduced by a factor of three, to ν = 1/3. For an array of wires in a magnetic field and spinless electrons, Kane et al. [9] have introduced an interaction that leads to a ground state of a FQHE ν = 1/3. Furthermore, they argued that there is a range of interactions that will flow to the topological phase described by this state [9][10][11]. Here we show that the same interaction, if operative between electrons of the same spin only, leads to a formation of a fractional topological insulator, i.e., to the spin-up electrons forming a ν = 1/3 state and the spindown electrons forming a ν = −1/3 state. Note that the same type of interaction terms were used by several authors to construct various 2D fractional topological states [6,7,12], and 1D fractional states [11,[13][14][15].
Our analysis is based on bosonization of the wires' degrees of freedom, and a transformation to a set of composite chiral fields, that may be interpreted as describing fermions at filling ν = 1. In terms of the composite fields, one can repeat the process which led to a gapping of the non-interacting case either by normal or by superconducting mechanisms. In terms of the original electrons, these mechanisms involve multi-electron processes, which either conserve the number of electrons or change it by a Cooper pair.
Both the electron-hole double-layer and spin-orbit wire system have counter propagating edge modes. They are distinct, however, in a few technical details. An electron-hole double layer system has been realized before in several materials, such as GaAs quantum wells and graphene. The requirements we have here -no bulk tunneling, sample quality that is sufficient for the observation of the fractional quantum Hall effect, and a good coupling to a superconductor or a magnet -are not easy to realize, but are not far from experimental reach [16][17][18]. In addition, we assume that the two layers are far enough such that inter-layer interactions do not play an important role, but close compared to the superconducting coherence length to enable pairing on the edges.
The array of wires we describe can in principle be formed using semi-conducting wires such as InAs and InSb [19][20][21], where variable Rashba spin-orbit coupling could be achieved by applying different voltages to gates above the wires. We stress that the wires construction is nothing but a specific example of a fractional topological insulator, and that any fractional topological insulator is expected to present the effects we discuss. Two-dimensional topological insulators were conclusively observed [22][23][24][25][26][27][28], and more recently proximity effects to a superconductor were demonstrated on their edges [29][30][31]. However, fractionalization effects due to strong electron-electron interaction were not observed yet in these systems and are less founded theoretically.
B. Ground state degeneracy and its fate in the transition to one dimension
In Sec. III we investigate the topological degeneracy of the ground state in the 2D thermodynamic limit. Using general arguments, we find that the degeneracy depends on the gapping mechanism of the edges: when both edges are gapped by the same mechanism, be it proximity coupling to a superconductor or to a magnet, the topological degeneracy is three, as expected. However, if one edge is gapped using a superconductor and the other is gapped using a magnet the ground state of the system is not degenerate.
Physically, the degeneracy is most simply understood in terms of the charge on the edge modes. For an annular geometry there are two edges, in the interior and the exterior of the annulus, and therefore four edge modes with four charges, q 1 , q 2 , q 3 , and q 4 (here we use the subscript 1,2 to denote the two counter-propagating edge modes on the interior edge, and 3,4 to denote the modes on the exterior edge. Edges 1 and 4 belong to one layer (or one spin direction) and edges 2 and 3 belong to the other layer (other spin direction); see Fig. (1a)). It will be useful below to distinguish between the integer part of q i , which we denote by n i , and the fractional part denoted by f i , to which we assign the values f i = −1/3, 0, 1/3, such that q i = n i + f i .
When a pair of counter-propagating edge modes, say with charges q 1 , q 2 , is gapped by normal back-scattering of single electrons, their total charge q 1 + q 2 is conserved. Since there is an energy cost associated with the total charge, it assumes a fixed value for all ground states. (The tunneling between the edges gaps the system and makes it incompressible, leading to an energy cost associated with a change of the total charge.) For simplicity, we fix this value to be zero, making q 1 = −q 2 . A strong back-scattering term makes n 1 − n 2 strongly fluctuating but leaves the fractional part f 1 = −f 2 fixed. As a consequence, there are three topological sectors of states that are not coupled by electron tunneling, characterized by f 1 being 0, 1/3 or -1/3.
Since each of the layers (in the double-layer system) or each spin direction (in the spin-orbit-coupling system) must have an integer number of electrons, the sums q_1 + q_4 and q_2 + q_3 must both be integers. This condition couples the fractional parts of the charges on all edges. Combining all constraints, we find that when both edges are gapped by normal backscattering, the conditions of Eq. (1) must be fulfilled (a plausible reconstruction of these constraints is sketched below). There are three solutions of these equations, describing three ground states with f_l = (−1)^l p/3, where p may take the values 0, 1, −1 and l = 1, 2, 3, 4. When both edges are gapped by a superconductor, f_2 and f_4 change sign in Eq. (1), and the fractional parts satisfy Eq. (2). Finally, when one edge is gapped by a superconductor and the other by normal back-scattering, only one of the two equations labeled (1) changes sign; the only possible solution is then f_l = 0, so that all q's must be integers, and the ground state is unique.
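The constraints themselves (Eqs. (1) and (2)) are not reproduced in this text. A plausible reconstruction, inferred from the verbal description above (charge neutrality on each normally gapped edge, pairing on each superconducting edge, and integer total charge per layer), is the following; the paper's exact form and labeling may differ.

FF case (both edges gapped by backscattering), Eq. (1):
f_1 + f_2 \in \mathbb{Z}, \qquad f_3 + f_4 \in \mathbb{Z},
which, together with the layer constraints f_1 + f_4 \in \mathbb{Z} and f_2 + f_3 \in \mathbb{Z}, gives f_l = (-1)^l\, p/3 with p = 0, \pm 1.

SS case (both edges gapped by superconductors), Eq. (2): the signs of f_2 and f_4 flip,
f_1 - f_2 \in \mathbb{Z}, \qquad f_3 - f_4 \in \mathbb{Z},
giving f_1 = f_2 = -f_3 = -f_4 = p/3, again a threefold family; for mixed (FS or SF) gapping, the combined set admits only f_l = 0.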
Formally, the degeneracy of the ground state may be shown by an explicit construction of two unitary operators, U_x and U_y, that commute with the low-energy effective Hamiltonian and satisfy the operator relation of Eq. (3) (a sketch of the standard form of this relation is given below). The existence of a matrix representation of this relation, acting within the ground state manifold, requires a degenerate subspace of minimal dimension 3. We construct such operators for the electron-hole system under the assumption that the only active degrees of freedom are those of the edge, and for the coupled wire system when we confine ourselves to an effective Hamiltonian. For both cases, one of these operators, say U_x, measures the f_l's, and the other operator, U_y, changes the f_l's by ±1/3 (the sign depends on l and on the type of gapping mechanism). We choose to work with a representation of U_x, U_y in which both operators, projected to the subspace of ground states, are independent of position.
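The operator relation (Eq. (3)) is not reproduced here. The standard relation that forces a threefold degeneracy, consistent with the surrounding discussion though not necessarily in the paper's exact phase convention, is

U_x U_y \;=\; e^{2\pi i/3}\, U_y U_x .

If |\psi\rangle is a ground state with U_x|\psi\rangle = e^{i\theta}|\psi\rangle, then U_y|\psi\rangle and U_y^2|\psi\rangle are ground states with U_x eigenvalues e^{i(\theta + 2\pi/3)} and e^{i(\theta + 4\pi/3)}, so the ground-state manifold contains at least three mutually orthogonal states.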
Even when L_x is infinite, a finite L_y splits the degeneracy. The source of the lifting of the degeneracy is tunneling of quasi-particles between the two edges of the annulus, i.e., tunneling of quasi-particles from the first to the last wire. More precisely, we find that as long as the bulk gap does not close, the only term that may be added to the low-energy Hamiltonian is the one given in Eq. (4) (a plausible reconstruction is sketched below). This term is generated by high orders of perturbation theory that lead to a transfer of quasi-particles between edges. The amplitude λ decays exponentially with the width of the system. For the wires realization this translates to an exponential decay with N, the number of wires. Other factors that determine the magnitude and phase of λ are elaborated on in the next subsection.
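The degeneracy-splitting term (Eq. (4)) is likewise omitted here. Since it transfers quasi-particles between the edges, i.e., acts as U_y within the ground-state manifold, it presumably takes the hermitian combination

\delta H \;=\; \lambda\, U_y \;+\; \lambda^{*}\, U_y^{\dagger},

with |\lambda| decaying exponentially in L_y (or in N for the wire construction). This is an inference from the surrounding text rather than the paper's verbatim expression.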
If L_x is also finite, there will be additional terms in the Hamiltonian proportional to U_x and U_x^†, with coefficients that fall off exponentially in L_x. The physical explanation of these terms is that when L_x is finite, root-mean-square fluctuations in the total charge in an edge mode are not infinite, but are proportional to L_x^{1/2}. This leads to energy differences between states with different values of the fractional charge f_l that decrease exponentially with increasing L_x.
C. Remnants of the degeneracy in the quasi-one dimensional regime
The topological degeneracy is lifted in the transition from a two-dimensional system to a quasi-one dimensional one, but it leaves behind an imprint which can in principle be measured. This is seen when we add another parameter to the Hamiltonian. For a torus, this parameter may be the flux within the torus. For the systems we consider here, when gapped by one superconductor at the interior edge and one superconductor at the exterior edge, this parameter may be the phase difference ϕ between the two superconductors. In this case the fractional quantum Hall torus forms the insulator in a superconductor-insulator-superconductor Josephson junction.
The dependence of the spectrum on these parameters is encoded in the amplitude λ of Eq. (4). In particular, since the tunneling charge is 2/3 of an electron charge, which is 1/3 of a Cooper pair, we find that the tunneling amplitude at the point x along the junction is proportional to the phase factor e^{iϕ(x)/3}, where ϕ(x) is the phase difference between the two superconductors at the point x. For the fractional topological insulator, no magnetic flux is enclosed between the superconductors, and the equilibrium phase difference does not depend on x. In contrast, for the electron-hole quantum Hall realization the magnetic flux threading the electron-hole double layer makes ϕ(x) vary linearly with x, such that the phase of the tunneling amplitude winds as a function of the position of the tunneling. The amplitude λ of Eq. (4) is an integral of contributions from all points at which the superconductors are tunnel-coupled, with T(x) denoting the local tunneling amplitude.
FIG. 2. The spectrum of the three low energy states as a function of the phase difference ϕ between the two superconductors (see text for elaboration). The amplitude of oscillations falls exponentially with the number of wires N. For a finite N, each eigenstate has a periodicity of 6π. At the special points ϕ = πn the spectrum remains 2-fold degenerate. If the system is of finite length L_x, the degeneracy at these points is lifted by a term that is exponentially small in L_x.
When the superconductors are tunnel-coupled only at a single point (say x = 0), such that T(x) ∝ δ(x), the spectrum of the three ground states as a function of ϕ, which is now the argument of T(x = 0), can be written in an explicit form in which α = 0, 1, −1 enumerates the ground states (a plausible reconstruction of this expression is sketched after this paragraph). This is shown in Fig. (2). While the amplitude t_0 is exponentially small in the width L_y, or in the number of wires N, we find that the spectrum as a function of the phase difference across the junction has points of avoided crossing in which the scale of the splitting between the two crossing states is proportional to e^{−L_x/ξ_x}, i.e., is exponentially small in L_x (here ξ_x is a characteristic scale which depends on the microscopics). Thus, in the quasi-one-dimensional regime, where L_y or N are small but L_x is infinite, the three states are split, but cross one another at particular values of ϕ.
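The explicit spectrum omitted above can be inferred, up to an overall sign and phase offset, from the properties stated in the text and in the caption of Fig. (2): three branches, a 6π periodicity of each branch, and twofold-degenerate crossings at ϕ = πn. A form consistent with all of these is

E_{\alpha}(\varphi) \;\simeq\; 2\,t_0 \cos\!\left(\frac{\varphi + 2\pi\alpha}{3}\right), \qquad \alpha = 0, \pm 1 .

Each branch is 6π-periodic in ϕ, and at ϕ = πn two of the three branches coincide, reproducing the protected crossings discussed in the text; this is a reconstruction, not the paper's verbatim equation.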
Remarkably, this crossing cannot be lifted by any perturbation that does not close the gap between the three degeneracy-split ground states and the rest of the spectrum. This lack of coupling between these states results from the macroscopically different Josephson current (from the inner edge to the outer edge) that they carry. The Josephson junction formed between the two superconductors will show a 6π-periodic DC Josephson effect for as long as the time variation of the phase is slow compared to the bulk energy gap, but fast compared to a time scale that grows as e^{L_x/ξ_x}. This Josephson current distinguishes between the three ground states. This current oscillates as a function of the position of the tunneling point for an electron-hole quantum Hall system and is position-independent for the fractional topological insulator.
When tunneling between edges takes place in more than one point, T (x) in (43) is non-zero at all these points, and has to be integrated. A particularly interesting case is that of a uniform junction. In that case T (x) and the Josephson current are constant for the fractional topological insulator, while in the electron-hole doublelayer the phase of T (x) winds an integer number of times due to the magnetic flux between the superconductors, and the Josephson current averages to zero.
A magnetic coupling between the electron and hole layers, or between electrons of the two spin directions may lead to a "(fractional) spin Josephson effect", in which spin current takes the place of charge current in the Josephson effect [32][33][34]. In this case, assuming that the spin up and down electrons are polarized in the z direction, coupling between the edge modes occurs by a magnet that exerts a Zeeman field in the x−y plane. The role of the phase difference in the superconducting case is played here by the relative angle between the magnetization at the interior and exterior edge, but an interesting switch between the two systems we consider takes place. In the electron-hole quantum Hall case the direction of the magnetization is uniform along the edges and a uniform and opposite electric current flows in the two layers.
For the fractional topological insulators the edges are gapped only when for one of the edges the direction of the magnetization in the x − y plane winds as a function of position. As a consequence, in our coupled-wire model the spin current oscillates an integer number of oscillations along the junction, and thus averages to zero.
Our discussion may be extended beyond the case of ν = 1/3. For Abelian states, we find that the periodicity of the Josephson effect is 2π/e*, where e* is the smallest fractional charge allowed in the state. In any Abelian state, this is also 2π times the degeneracy of the ground state in the thermodynamic limit.
III. GROUND STATE DEGENERACY IN THE THERMODYNAMIC 2D LIMIT
In this section we derive in detail the degeneracy of the ground state in the thermodynamic two-dimensional limit of the two systems we consider.
A. Description in terms of edge modes only
The systems we consider have two edges, each of which carries a pair of counter-propagating edge modes.
In the absence of coupling between the layers, the bosonic Hamiltonian of the edges is composed of the kinetic term given in Eq. (7). Here we assumed all edge velocities to be the same and neglected small-momentum interactions between the edges, for simplicity.
The fields χ_i satisfy the commutation relation of Eq. (8), whose inter-mode part is π sign(l − j). Coupling between the edge modes has the form of Eq. (9),
where l, j = 1, 2 for the interior edge and l, j = 3, 4 for the exterior edge. The plus sign refers to superconducting coupling and the minus sign to normal back-scattering. The edge is gapped when the coupling constant λ is large, which we assume to be the case. The charge on the l'th edge mode is related to the winding of χ_l, namely q_l = (−1)^l (1/2π) ∫ dx ∂_x χ_l(x), where q_l is the charge in units of the electron charge. For uncoupled edge modes, the charges q_l are quantized in units of the quasi-particle charge, 1/3. When two edge modes are coupled through a normal or superconducting coupling, the charge on each edge fluctuates heavily. However, because only whole electrons may be transferred between edge modes on different layers, or between edge modes and an adjacent superconductor, the operators e^{i2πq_l} commute with both parts of the Hamiltonian, Eqs. (7) and (9). We therefore characterize the different states according to these operators, i.e., according to the fractional part of the charge on the various edges. The fact that the total charge on each layer is an integer gives two general constraints (the sums q_1 + q_4 and q_2 + q_3 are integers), regardless of the mechanism for coupling the edges. Two other relations come from energy considerations, which depend on the gapping mechanism. For the case where the two edges are gapped using a superconductor, it is energetically favorable to form singlets, such that the condition of Eq. (11) holds. Notice that if Eq. (11) is not satisfied, the edge carries a non-zero spin which cannot be screened by the superconductor. This configuration is therefore energetically costly.
In the case where both edges are gapped by normal back-scattering processes, which we refer to as the FF case, it is energetically favorable to preserve total charge neutrality, because an insulating magnet cannot screen charge. This gives us the corresponding charge-neutrality conditions on each edge. Altogether, then, for the SS and FF gapping mechanisms, there are three possible values for e^{i2πq_1}, namely 1, e^{i2π/3}, and e^{i4π/3}, and the eigenvalue of this operator fixes the values of all the operators e^{i2πq_l} (for l = 2, 3, 4). These operators are of course equal to the e^{i2πf_l} introduced above. In fact, the operators e^{i2πq_l} may all serve as the unitary operator U_x from Eq. (3). To establish a ground state degeneracy, we need to find an operator that commutes with the Hamiltonian and varies U_x. This operator is the one that transfers a charge of 1/3 in each layer (for the SS case), or charges of 1/3, −1/3 (for the FF case), from the interior to the exterior. For example, if we choose U_x = e^{2πiq_1}, then U_y takes the form given in Eq. (13). Here the upper sign refers to superconducting coupling and the lower sign to coupling to a magnet. The fields χ_i in (13) are all to be evaluated at the same point x.
It is easy to see that this assignment of U x , U y satisfies Eq. (3), thus establishing the ground state degeneracy of the Hamiltonian in Eqs. (7) and (9) for the cases of SS and FF gapping mechanisms. In the case where the two edges are gapped using different mechanisms (FS or SF), the only solution is the one where e i2πq l =1 (for l = 1, 2, 3, 4), and the ground state is therefore non-degenerate. For a finite system the three-fold degeneracy is split. In particular, in the quasi-1D regime in which L x is infinite and L y is finite, the splitting is a consequence of tunnel coupling between the interior and the exterior. This regime will be explored below.
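The content of Eq. (3) is not reproduced in this excerpt; given the later statement that the smallest representation of the resulting algebra requires 3 × 3 matrices, it is presumably the magnetic-algebra (clock) relation between U_x and U_y, sketched here in LaTeX as our reconstruction rather than a quotation:

\begin{equation}
  U_x U_y \;=\; e^{i 2\pi/3}\, U_y U_x .
\end{equation}

If |u\rangle is an eigenstate of U_x with eigenvalue u, then U_y|u\rangle is an eigenstate with eigenvalue e^{i2\pi/3} u; acting repeatedly with U_y generates three distinct U_x eigenvalues, so any representation of the pair (U_x, U_y) is at least three dimensional, which is the degeneracy argument used in the text.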
Before doing that, however, we introduce the coupled wires system and study its ground state degeneracy directly.
B. The coupled wires construction for a Fractional Topological Insulator
In this Section we explain how a fractional topological insulator may be constructed from a set of coupled wires, as a result of a combination of spin-orbit coupling and electron-electron interaction. We start with the case of non-interacting electrons, in which case a 2D topological insulator is formed, and then introduce interactions that lead to the fractionalized phase.
The integer case -a non-interacting quantum spin Hall state
We consider an array of N quantum wires, with a Rashba spin-orbit coupling (see Fig. (1b)). Each wire is of length L_x and has periodic boundary conditions. We tune the Rashba electric field (which we set to be in the y direction, for simplicity) such that the spin-orbit coupling of wire number n is linear in n. The resulting term in the Hamiltonian takes the form given in Eq. (14), where σ_z is the spin in the z direction and u is the spin-orbit coupling. The spectrum of wire number n then follows, where m is the effective mass and k_so = um. The energy of the different wires as a function of k_x is shown in Fig. (3a).
The similarity of the spectrum to the starting point of the wires construction of the QHE [9,10,35] is evident. This system is then analogous to two annuli of electrons of opposite spins subjected to opposite magnetic fields or to the electron-hole double-layer we discussed above (see Fig. (1a)).
Following the analogy with the wires construction of the QHE, we define the filling factor in terms of k_F^0, the Fermi momentum in the absence of spin-orbit coupling (see Fig. (3a)).
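The defining expression for ν is omitted from this excerpt. By analogy with the wire construction of the QHE, and consistently with the Fermi momenta k^ρ_{n,σ} quoted below Eq. (17), it presumably reads (our reading, not a quotation):

\begin{equation}
  \nu \;=\; \frac{k_F^0}{k_{so}} ,
\end{equation}

so that ν = 1 places the chemical potential at the crossing points of adjacent parabolas, and ν = 1/3 corresponds to k_F^0 = k_so/3.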
In the "integer" case, ν = 1, the chemical potential is tuned to the crossing points of two adjacent parabolas.
We linearize the spectrum around the Fermi points, and use the usual bosonization technique to define two chiral bosonic fields φ^{R/L}_{n,σ}, where n is the wire index, σ is the spin index, and R (L) represents right (left) movers.
In terms of these bosonic fields, the fermion operators take the form of Eq. (17), where k^ρ_{n,σ} = −σ((2n − 1)k_so + ρ k_F^0) is the appropriate Fermi momentum in the absence of interactions and tunneling between the wires, with σ = 1 (−1) corresponding to spin up (down), and ρ = 1 (−1) corresponding to right (left) movers. The chiral fields satisfy the commutation relations of Eq. (18), which guarantee that the fermion fields defined in Eq. (17) satisfy Fermi statistics.
Once we linearize the spectrum, it becomes convenient to present it diagrammatically by plotting only the Fermi momenta as a function of the wire index. Fig. (4) shows the diagram corresponding to ν = 1, where a right (left) mover is represented by the symbol ⊙ (⊗).
FIG. 3. (a) The spectrum of a system consisting of three wires (see Fig. (1b)) with non-interacting electrons subjected to spin-orbit coupling whose magnitude depends on the wire index according to Eq. (14), when tunneling between the wires is switched off. The spectra in blue, red, and green correspond to wires number 1, 2, and 3. Solid lines correspond to spin-down, and dashed lines correspond to spin-up. (b) The resulting spectrum when a weak spin-conserving tunneling amplitude is switched on between the wires. The bulk is now gapped, with helical modes localized on the edges.
One sees that single electron tunneling operators of the type represented by the arrows in Fig. (4) are allowed by momentum conservation. Noting that these operators commute with one another, the fields within the cosines may be pinned, and therefore the bulk is gapped. These terms, however, leave 4 gapless modes, localized on wires 1 and N. In fact, the above model is a topological insulator, and the gapless helical modes are the corresponding edge modes, protected by time-reversal symmetry and charge conservation. Although our model also has a conservation of S_z, this is not actually necessary to preserve the gapless edge modes. To completely gap out the spectrum, we have to gap out the two edges separately. This can be done using two mechanisms: proximity coupling of wires 1 and N to a superconductor, which breaks charge conservation, or to a magnet, which breaks time-reversal symmetry. The terms in the Hamiltonian that correspond to these cases are given in Eq. (20). The phases δ_1, δ_N are the phases of the superconducting order parameter of the superconductors that couple to wires 1 and N, respectively. The phases β_1, β_N are the angles of the Zeeman fields (which lie in the x − y plane) coupling to wires 1 and N, respectively, with respect to the x-axis. As the last equation shows, for the magnetic field coupled to the N'th wire to allow for a momentum-conserving back-scattering, we must have β_N = −k_so N x, i.e., the Zeeman field acting on the N'th wire must rotate in the x − y plane with a period of 2π/(k_so N). This field then breaks translational invariance.
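For concreteness, the momentum-conserving single-electron tunneling operators referred to above can be inferred from the Fermi momenta k^ρ_{n,σ}: at ν = 1 (k_F^0 = k_so) the right mover of wire n and the left mover of wire n + 1 with the same spin sit at the same momentum. A sketch of the presumed form (our reconstruction, not a quotation of the omitted equation) is

\begin{equation}
  H_t \;=\; -\,t \sum_{n=1}^{N-1} \sum_{\sigma} \int dx\;
  \psi^{L\dagger}_{n+1,\sigma}(x)\,\psi^{R}_{n,\sigma}(x) \;+\; \mathrm{h.c.}
  \;\;\propto\;\; \sum_{n,\sigma} \int dx\,
  \cos\!\left(\phi^{R}_{n,\sigma} - \phi^{L}_{n+1,\sigma}\right),
\end{equation}

which is the cosine form whose argument gets pinned when the coupling is strong, as described above.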
The fractional case -a Fractional Topological Insulator
We now consider the case ν = 1/3, depicted diagrammatically in Fig. (5). Single electron tunneling processes of the type we considered above do not conserve momentum at this filling factor (see Fig. (5)), and one has to consider multi-electron processes in order to gap out the bulk. The problem is simplified if one defines new chiral fermion fields ψ̃ in each wire according to the transformation of Eq. (21), with the linear transformation of Eq. (22), η^{R/L}_{n,σ} = 2φ^{R/L}_{n,σ} − φ^{L/R}_{n,σ}, p^{R/L}_{n,σ} = 2k^{R/L}_{n,σ} − k^{L/R}_{n,σ}.
Strictly speaking, the operators in (21) should operate at separated yet close points in space, due to the fermionic nature of ψ^{R/L}_{n,σ}. It is simple to check that [η^{ρ}_{n,σ}(x), η^{ρ′}_{n′,σ′}(x′)] = 3iρπ δ_{σ,σ′} δ_{ρ,ρ′} δ_{n,n′} sign(x − x′) + iπ sign(n − n′) + δ_{n,n′} π σ^{σ,σ′}_y + 3δ_{σ,σ′} σ^{ρ,ρ′}_y (Eq. (23)). Eq. (23) implies that ψ̃ satisfies Fermi statistics. In addition, if one draws the diagram that corresponds to the p's, the effective Fermi momenta of the ψ̃ fields, one gets the same diagram as in the ν = 1 case (Fig. (4)). The linear transformation defined in Eq. (22) can therefore be interpreted as a mapping from ν = 1/3 for the electrons to ν = 1 for the fermions ψ̃. The mapping from ν = 1/3 to ν = 1 suggests a relation between the local transformation defined in Eq. (22) and the Chern-Simons transformation that attaches two flux quanta to each electron, making it a composite fermion. This relation will be explored in a future work [36]. Single-ψ̃ tunneling operators conserve momentum, and one can repeat the process that led to a gapped spectrum in the integer case. First, we switch on single-ψ̃ tunneling operators of the form given in Eq. (24). While these operators are simple tunneling operators in terms of the ψ̃-fields, they represent the multi-electron processes described by the arrows in Fig. (5). In terms of the ψ̃-fields, it is clear that one cannot write analogous interactions between electrons of opposite spins, and therefore the dominating terms are those that couple electrons with the same spins. Notice that, as opposed to the integer case, these operators are irrelevant in the weak coupling limit. However, they may be made relevant if one introduces strong repulsive interactions [9-11], or a sufficiently strong t̃.
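As a sketch (our reconstruction, under the mapping to ν = 1 described above, not a quotation of Eq. (24)), the single-ψ̃ tunneling operators are the ψ̃ analogues of the integer-case tunneling; in terms of the original electrons they are the correlated multi-electron processes of Fig. (5):

\begin{equation}
  \tilde H_t \;\propto\; \sum_{n,\sigma} \int dx\;
  \tilde\psi^{L\dagger}_{n+1,\sigma}\,\tilde\psi^{R}_{n,\sigma} \;+\; \mathrm{h.c.}
  \;\;\sim\;\; \sum_{n,\sigma} \int dx\,
  \cos\!\left(\eta^{R}_{n,\sigma} - \eta^{L}_{n+1,\sigma}\right),
  \qquad
  \eta^{R/L}_{n,\sigma} = 2\phi^{R/L}_{n,\sigma} - \phi^{L/R}_{n,\sigma} .
\end{equation}

With the effective Fermi momenta p^ρ_{n,σ}, these terms conserve momentum at ν = 1/3, exactly as the single-electron terms do at ν = 1.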
For N wires, Eq. (24) introduces 2N − 2 tunneling terms, which gap out 4N − 4 modes and leave 4 gapless chiral η-modes on the edges. Two counter-propagating modes are at the j = 1 wire, and two are at the j = N wire. Notice that the gapless η-fields on the edges are related to the corresponding χ-fields defined in Sec. III A by χ = η/3. Once again, these may be gapped by proximity coupling to a superconductor or a magnet. Operators of the type shown in Eq. (20), however, do not commute with the operators defined in Eq. (24). The arguments of the cosines in (20) cannot then be pinned simultaneously with those of Eq. (24). The lowest order terms that commute with the operators in Eq. (24) are given in Eq. (25). Again, for the magnetic coupling to gap the edge modes on the N'th wire, it must wind in the x − y plane with a period of 2π/(k_so N). The electronic density is three times smaller than in the previous case, so on average there is 1/3 of an electron per period. Guided by the analogy between the above construction and the ν = 1/3 FQH state on a torus, we expect the ground state to have a 3-fold degeneracy.
Using the present formalism, we will be able to see how this degeneracy is lifted as one goes from an infinite array to the limiting case of a few wires.

FIG. 5. A diagrammatic representation of the fractional case ν = 1/3. Now, we find that only multi-electron processes can gap out the bulk. The processes we consider are represented by colored arrows. In terms of the composite ψ̃-fields, however, the diagram corresponding to the fractional case is identical to the one corresponding to the integer case ν = 1 (Fig. (4)). In this case, the complicated multi-electron processes are transformed into single-ψ̃ tunneling operators. The transformation from ψ to ψ̃ therefore proves very useful in analyzing the fractional case.
Ground state degeneracy in the wire construction
For simplicity, we focus first on the FF case, where the analogy to the FQHE on a torus is explicit. In this case, we define the idealized Hamiltonian as in Eq. (26), where the first term is the quadratic term that contains the non-interacting part of the Hamiltonian and the small-momentum interactions (for simplicity, we consider only intra-wire small-momentum interactions). We assume that all the inter-wire terms become relevant and acquire an expectation value. To investigate the properties of the ground state manifold, we define the two unitary operators U_x and U_y(x), the latter given in Eq. (28). All the η fields are functions of position x. The phase υ(x) in Eq. (28) is given by Eq. (30). Since all the operators in the sum are pinned by the bulk Hamiltonian, they may be treated as classical fields, and their value becomes x-independent in any one of the ground states. Similarly, the combination of operators (η^R_{N,↓} − η^L_{N,↑} + η^R_{1,↑} − η^L_{1,↓}), which appears on the right side of Eq. (28), is pinned by the coupling to the boundary, and becomes independent of x. Therefore, the operators U_y(x) may be considered to be independent of x within the manifold of ground states.
Notice that the second equality in Eq. (28) shows that U_y(x), defined in terms of the wires' degrees of freedom, is identical to Eq. (13) (up to a phase). The form of U_y(x) shown in the first equality of Eq. (28) is useful because it allows us to express U_y(x) as a product of electronic operators, Eq. (32), where the x-dependence of the operators is omitted for brevity. It can be verified that U_y(x) and U_x commute with the idealized Hamiltonian, so that operating with U_y(x) or U_x on a ground state leaves the system in the ground state manifold. Using Eq. (23), it can also be checked directly that U_x and U_y(x) obey the algebra of Eq. (34), with a commutation phase e^{i2π/3}, independent of x. The smallest representation of this algebra requires 3 × 3 matrices [37], which shows that the ground state of the idealized Hamiltonian (26) must be at least 3-fold degenerate. The operators U_y (U_x) can be interpreted as the creation of a quasiparticle-quasihole pair, tunneling of the quasiparticle across the y (x) direction of the torus, and annihilation of the pair at the end of the process. In fact, if we adopt this interpretation, Eq. (34) is a direct consequence of the fractional statistics of the quasiparticles [37].
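A minimal numerical illustration of this algebra (a sketch of the standard clock-shift construction, not code from the paper): the 3 × 3 clock and shift matrices obey U_x U_y = e^{i2π/3} U_y U_x together with U_x^3 = U_y^3 = 1, realizing the smallest possible representation.

import numpy as np

omega = np.exp(2j * np.pi / 3)

# Clock matrix: diagonal with the three allowed values of e^{i 2 pi q_1} (plays the role of U_x)
Ux = np.diag([1.0 + 0j, omega, omega**2])

# Shift matrix: cyclically permutes the three fractional-charge sectors (plays the role of U_y)
Uy = np.roll(np.eye(3, dtype=complex), 1, axis=0)

# Verify the clock algebra and the cubic relations used in the text
assert np.allclose(Ux @ Uy, omega * Uy @ Ux)
assert np.allclose(np.linalg.matrix_power(Ux, 3), np.eye(3))
assert np.allclose(np.linalg.matrix_power(Uy, 3), np.eye(3))

print("Ux Uy = e^{i2pi/3} Uy Ux verified on a 3x3 representation")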
A similar analysis can be carried out for the SS case. U_x is identical to the operator used in the FF case, but now U_y takes a modified form, and the entire analysis can be repeated.
C. The coupled wires construction of an electron-hole double layer
In this Section we explain how one can model a quantum Hall electron-hole double layer at a fractional filling factor ν = 1/3 using a set of coupled wires. Most of the analysis is very similar to the analysis presented for the fractional topological insulator, but some technical differences are worth pointing out. We examine a system with two layers, each containing an array of wires. In one layer, the electron layer, we tune the system such that only states near the bottom of the electronic band are filled. In this case, we can approximate the spectra of the various wires as parabolas. If we add a constant magnetic field B perpendicular to the layers, and use the Landau gauge to write the electromagnetic potential as A = −By x̂, the entire band structure of wire number n will be shifted by an amount 2k_φ n, where k_φ is defined as k_φ = eBa/2. The energy of wire number n is therefore written as a shifted parabola (if we choose the position of wire number 1 to be at y = a/2), where U_e is a constant term and m is the effective mass.
In the hole layer the bands of the various wires are nearly filled, such that we can expand the energy near the maximum in an analogous form. In the above, we assumed that the effective masses of the electron and the hole layers have the same magnitude and opposite signs. Assuming that U_h > U_e and tuning the chemical potential to µ = (U_e + U_h)/2, we get the spectra of the two layers, where σ = 1 (−1) for the electron (hole) layer. This way the system has a built-in particle-hole symmetry in its low energy Hamiltonian. Notice that, as a result of the magnetic field, the spectra of the two layers are shifted in the same direction. This is a consequence of the common origin of the electron and hole spectra from a Bloch band whose shift is determined by the direction of the magnetic field. We define k_F^0 = √(2mδ), and the filling factor is defined accordingly. In the case ν = 1, the corresponding spectrum is given in Fig. (6a). As before, if we apply tunneling between adjacent wires in the same layer, we get the gapped spectrum in Fig. (6b). Furthermore, we see that each edge carries a pair of counter-propagating edge modes (one for each layer).
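The defining expression for the filling factor of the double layer is omitted here; by analogy with the spin-orbit construction, with the magnetic shift 2k_φ per wire playing the role of 2k_so, it is presumably (our reading, not a quotation)

\begin{equation}
  \nu \;=\; \frac{k_F^0}{k_\varphi}, \qquad k_F^0 = \sqrt{2 m \delta} .
\end{equation}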
It is straightforward to generalize this to the case of filling ν = 1/3, shown in Fig. (6c). To treat this case, we follow exactly the same steps as in Sec. III B: we first linearize the spectrum, and write the problem in terms of the chiral bosonic degrees of freedom φ^{R/L}_{n,σ}, where now σ = e, h represents the layer number, and n represents the wire index. To treat the fractional case, we define new chiral fields η^{R/L}_{n,σ} = 2φ^{R/L}_{n,σ} − φ^{L/R}_{n,σ}. Like before, it can be checked that these modes behave like modes at filling 1, so we can repeat the analysis performed in this case.
This process leaves us with two counter-propagating η-modes on each edge: η^L_{1,e}, η^R_{1,h}, η^R_{N,e}, η^L_{N,h}. These modes can be gapped out by terms analogous to the terms in Eq. (25); these are given in Eq. (39). In contrast to the case of the fractional topological insulator, here the backscattering terms conserve momentum, i.e., do not include phases that are linear in x. Rather, it is the superconducting term H̃^S_N that appears not to conserve momentum. However, the flux between the two superconductors will lead to a winding of the phase difference between them, which can cancel the x-dependent phase of H̃^S_N. Let us first consider the situation where the bounding superconductor wires are thin enough that there are no vortices inside them. The energy of a superconducting ring is minimized when ∆φ, the change in the superconducting phase around the ring, is equal to 2eΦ, where Φ is the magnetic flux enclosed by a circle embedded at the center of the wire. The value of ∆φ is quantized in multiples of 2π, and in practice there may exist a number of metastable states where it differs from 2eΦ by a finite amount and the wire carries a supercurrent around its circumference. Let us consider a model where there is a distance a′ between the center of the innermost superconductor and the center of our first electron-hole nanowire, and a similar separation between the N'th nanowire and the outer superconductor. If the centers of the nanowires are separated from each other by a distance a, then the flux Φ is equal to BaL_x(N − 1 + 2(a′/a)). In this case, if the superconductors are in their ground states, we get δ̃_1 = −(2 + 4a′/a) k_φ x + δ̃_1^0 and δ̃_N = −(4N − 2 + 4a′/a) k_φ x + δ̃_N^0, where δ̃^0_{1(N)} do not depend on x. If a′ is tuned to a′ = a/2, the oscillating phases are eliminated from Eq. (39).
If a′ differs from a/2, it may still be possible to gap out the edges. If the phase mismatch is small, and if the coupling to the superconductor is not too weak, then there can be an adjustment of the electron and hole occupations in the nanowires nearest the two edges, which allows the phase change around the nanowires to match the phase change in the superconductors. The energy gain due to the formation of a gap can exceed the energy cost of altering the charge densities in the nanowires.
If the difference between a′ and a/2 is too large, then the carrier densities in the inner and outer nanowires will not change enough to satisfy the phase matching condition. In this case, a variation of the magnetic field of order 1/N would eliminate the x-dependence of the phases, at the cost of introducing quantum Hall quasiparticles in the bulk of the system. For large N, the density of these quasiparticles will be small. Presumably they will become localized and not take the system out of the quantum Hall plateau.
We note that the separation a′ can be engineered, and, in principle, can even be made negative. Consider, for example, a situation where the superconducting wire sits above the plane of the nanowires, so that, depending on the shape of a cross-section of the wire, its center of gravity may sit inside or outside of the line of contact to the outermost nanowire.
The situation is more complicated if the superconductors are thick enough that they contain vortices in the presence of the applied magnetic field. If the vortices are effectively pinned, however, it should be possible to achieve conditions where the electron-hole system is gapped and experiments such as Josephson current measurements can be performed.
The degeneracy of the ground states in both the SS and FF cases may be shown by defining the two operators U_x and U_y in exactly the same form as in Sec. III B 3 (with ↓ → e and ↑ → h), and following the same analysis.
IV. MEASURABLE IMPRINT OF THE TOPOLOGICAL DEGENERACY IN QUASI-ONE DIMENSIONAL SYSTEMS
We now look at the quantum Hall double-layer system with ν = ±1/3. As long as the bulk gap does not close, in the limit of infinite L x and infinite N (or L y ) we expect deviations from the idealized Hamiltonian not to couple the three ground states. When N and L y are finite and L x is still infinite, coupling does occur, and the degeneracy is lifted.
Generally, hermitian matrices operating within the 3 × 3 subspace of ground states of the idealized Hamiltonian may all be written as combinations of the nine unitary operators U_x^j U_y^l, with j, l = 0, 1, 2 and coefficients λ_{jl}. Note that a direct consequence of Eq. (34) is that U_x^3 = U_y^3 = 1 (this can most easily be understood by recalling that the operators transport quasiparticles across the torus; acting three times with each of them is equivalent to transporting an electron around the torus, which cannot take us from one ground state to another). However, in the limit of infinite L_x local operators cannot distinguish between states of different fractional charges, and therefore cannot contain the operator U_x. Thus, up to an unimportant constant originating from λ_{00}, deviations from the idealized Hamiltonian (projected to the ground state manifold) take the form of Eq. (4). The coefficient λ may be expressed as an integral, and we expect that the absolute value of the amplitude T(x) should fall off exponentially with N, as discussed in Section II B. One can see this explicitly in the various models we have constructed from coupled wires. For example, in the case of a fractional topological insulator with magnetic boundaries, the operator U_y, according to (32) and (17), involves a product of factors involving four electronic creation and annihilation operators on each of the N wires. The bare Hamiltonian contains only four-fermion operators on a single wire, and two-fermion operators that connect adjacent wires, with an amplitude t that we consider to be small. The operator U_y can only be generated by higher orders of perturbation theory, in which the microscopic tunneling amplitude t occurs at least 2N times. In our analysis, we have assumed that interaction strengths on a single wire are comparable to the Fermi energy E_F, so we expect T to be of order |t/E_F|^{2N} or smaller. Similar arguments apply to the other cases of superconducting boundaries or electron-hole wires. We also note that if the system is time-reversal invariant, we must have T = T*. The phase of T(x) depends on the realization – electron-hole quantum Hall vs. fractional topological insulator – and on the gapping mechanism – two superconductors or two magnets. We start from the case of the fractional topological insulator gapped by two superconductors. Eq. (25) shows that for the edges to be gapped, the superconductors on the two edges should have uniform phases δ̃_1, δ̃_N. We choose a gauge where δ̃_1 = 0 and denote ϕ = δ̃_N to be the phase difference.
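Collecting the statements above, the projected low-energy perturbation referred to as Eq. (4) (and later as Eq. (42)) presumably takes the form below, with λ obtained as an integral over the local amplitude T(x); this is a sketch consistent with the surrounding text rather than a quotation of the omitted equation:

\begin{equation}
  \Delta H \;=\; \lambda\, U_y \;+\; \lambda^{*}\, U_y^{\dagger},
  \qquad
  \lambda = \int dx\, T(x),
  \qquad
  |T| \sim \left| \frac{t}{E_F} \right|^{2N} .
\end{equation}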
In the case of a fractional topological insulator, the proximity gapping terms are the superconducting terms introduced above (note that these terms involve coupling to the superconductor, and we therefore have Δ̃_{1(N)} ∝ |Δ_{1(N)}|, where Δ_{1(N)} are the corresponding superconducting order parameters). We define new bosonic fields through the additional transformation of Eq. (45), with η̃^ρ_{n,σ} = η^ρ_{n,σ} for all the other values of n, σ, ρ. If we rewrite the Hamiltonian in terms of the new fields, the phase ϕ is eliminated from the idealized Hamiltonian. However, this modifies the operator U_y (defined in Eq. (28)), which now takes the form U_y e^{iϕ/3}. Thus, a non-zero phase difference ϕ shifts the argument of λ in Eq. (42) by ϕ/3. In the time reversal symmetric case λ is real, and we find, by diagonalizing ΔH, the eigenvalues given in Eq. (47). The resulting spectrum as a function of ϕ is depicted in Fig. (2).
At ϕ = πn the degeneracy is not completely lifted, as two states remain 2-fold degenerate. These states are not coupled by the low energy Hamiltonian (42), and the lifting of their degeneracy requires terms with j ≠ 0 in (41). Such terms distinguish between states of different edge charges f_i and originate from tunneling between the three physically distinct minima of the potential (9). The amplitude for tunneling, and hence the splitting, is proportional to e^{−S}, with S the action of the imaginary-time tunneling trajectory. Due to the integration over x in the Hamiltonian, this action is linear in L_x, and hence the tunneling amplitude scales as e^{−L_x/ξ_x}. Neglecting this splitting, Eq. (47) shows that all eigenstates have a 6π periodicity. A measurement of the Josephson current, given by the derivative of the energy with respect to ϕ, can detect the 6π-periodicity. Due to the exponentially small splitting at the crossing points, this property can be observed by changing the flux at a rate that is not slow enough to follow this splitting.
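Writing the branch energies as E_p(ϕ) = 2λ cos[(ϕ + 2πp)/3] (our reading of Eq. (47), consistent with the general expression quoted in Section V), the Josephson current of each branch follows by differentiation and inherits the 6π periodicity (up to conventions for the charge prefactor):

\begin{equation}
  I_p(\varphi) \;=\; \frac{2e}{\hbar}\, \frac{\partial E_p}{\partial \varphi}
  \;=\; -\,\frac{4 e \lambda}{3 \hbar}\, \sin\!\left(\frac{\varphi + 2\pi p}{3}\right),
  \qquad p = 1, 2, 3 .
\end{equation}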
Note that the 6π-periodic component of the spectrum is completely determined by Eq. (34). This part of the spectrum is therefore highly insensitive to the microscopic details, and can serve as a directly measurable imprint of the topological degeneracy with only a few wires. There will also be a contribution from ordinary Cooper pair tunneling between the superconductors, which does not distinguish between the ground states and has 2π periodicity. This term will alter the detailed shapes of the three spectra but not their splitting or periodicity. In the case where time reversal symmetry does not hold, λ is not necessarily real. Consequently, the spectrum in Eq. (47) is shifted according to ϕ → ϕ + Arg (λ), and the crossing points are not constrained to be at ϕ = πn.
Similar results arise in the FF case for a quantum Hall electron-hole double layer. Now, the angle ϕ is the relative orientation angle of the Zeeman fields (which lie in the x − y plane). To be precise, if we fix the Zeeman field at wire number N to point in the x direction, and the field at wire number 1 to have an angle ϕ relative to the x direction, we get the corresponding proximity terms. Similar to Eq. (45), we define new bosonic fields through an analogous transformation, with η̃^ρ_{n,σ} = η^ρ_{n,σ} for the other fields. Again, the gapping term acting on the N'th wire returns to its original form (with ϕ = 0), but U_y becomes U_y e^{iϕ/3}. Therefore, the spectrum as a function of ϕ is identical to the spectrum found in the SS case.
In the other two cases, the situation is more complicated, since ϕ depends on x. For the quantum Hall electron-hole double layer gapped by superconductors, ϕ increases linearly with x, due to the flux penetrating the junction between the two superconductors. For the fractional topological insulator gapped by magnets, Eq. (25) requires that β_N increases linearly with x. In both cases, this winding leads to λ = ∫ dx |t(x)| e^{i2πnx/L + iϕ}, with n an integer. A uniform tunneling amplitude |t(x)| then leads to a vanishing λ, while non-uniformity allows for a non-vanishing λ.
V. EXTENSIONS TO OTHER ABELIAN STATES
We have shown above that it is possible to effectively realize experimentally the ν = 1/3 FQHE state on a torus, and that by measuring the Josephson effect in the resulting construction we can directly measure the corresponding topological degeneracy. In this section we extend the above results to other Abelian FQHE states.
For a FQHE state described by an M × M K-matrix, there is a ground state degeneracy of d = det K on a torus, and d topologically distinct quasiparticles. Each quasiparticle is a multiple of the minimally charged quasiparticle, whose charge is e* = e/d. Repeating the analysis we carried out in Sec. III, we consider an electron-hole double layer system or a fractional topological insulator, and couple the counter-propagating edge modes. Since there are now M pairs of counter-propagating modes on each edge, we need M scattering terms. We assume that these terms are all mutually commuting, that they are either all charge-conserving or all superconducting, and that the M edge modes of each layer (spin direction) are mutually coupled. Under these assumptions, each of the four edges is characterized by one quantum number – the fractional part of the total charge f_i (with i = 1, ..., 4), which may take the values −(d−1)/2d, −(d−3)/2d, ..., (d−1)/2d. Similar to the case where ν = 1/3, the requirements of a total integer charge for each layer or spin direction, together with the mechanism of gapping and the requirement to minimize the energy of the edge Hamiltonians, relate all values of f_i to one another.
We work in a basis |f⟩ where the fractional charges f_i are well defined. We define the unitary operator U_y, which transfers a single minimally charged quasiparticle, analogously to the operator defined before, such that U_y|f⟩ = |(f + e*/e) mod 1⟩. It follows that U_y^l|f⟩ = |(f + le*/e) mod 1⟩, and that U_y^d = 1. We therefore have, in general, the relation of Eq. (50). Again, in the quasi-1D limit where L_x is infinite and N is finite, Hermitian combinations of the operators U_y^l are the only operators capable of lifting the degeneracy. The amplitude of these terms falls exponentially with N. In order to analyze the effects of these perturbations we consider terms of the form ΔH = Σ_{l=1}^{(d−1)/2} λ_l (e^{iδ_l} U_y^l + e^{−iδ_l} U_y^{−l}), where λ_l ∝ e^{−N/ξ_l} is a real coefficient (note that we expect terms with l > 1 to result from higher orders in e^{−N}; more specifically, we expect ξ_l ∝ 1/l). The summation was terminated at (d − 1)/2 because of Eq. (50) and the requirement that the Hamiltonian is hermitian. Again, the resulting spectrum depends on the realization, the gapping mechanism, and the uniformity of the tunneling amplitude. This dependence is similar to the one discussed for ν = 1/3. For example, for uniform tunneling between two superconductors separated by a fractional topological insulator, a relative phase ϕ between the two superconductors translates to δ_l = (e*/e) l ϕ = lϕ/d.
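The relation referred to as Eq. (50) is not reproduced in this excerpt; generalizing Eq. (34) from d = 3 to arbitrary d = det K, it is presumably the clock algebra of the two quasiparticle-transport operators (our reconstruction):

\begin{equation}
  U_x U_y \;=\; e^{i 2\pi e^{*}/e}\, U_y U_x \;=\; e^{i 2\pi/d}\, U_y U_x ,
  \qquad
  U_x^{\,d} \;=\; U_y^{\,d} \;=\; 1 ,
\end{equation}

whose smallest representation is d-dimensional, consistent with the d-fold topological degeneracy.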
The spectrum of this Hamiltonian for the time reversal symmetric case is then ΔE_p = Σ_{l=1}^{(d−1)/2} 2λ_l cos[l(ϕ + 2πp)/d], with p = 1 ... d. Each eigenstate has a 2πd-periodicity, and like the ν = 1/3 case we find that the overall periodicity is 2π times the degeneracy of the system in the thermodynamic limit. In addition, similar to the ν = 1/3 case, at the time-reversal invariant points ϕ = πn we have degeneracy points protected by the length of the wires. For example, at ϕ = 0 we have (d−1)/2 pairs of states |p⟩, |d − p⟩ (p = 1, ..., (d−1)/2) which have the same energy. It can easily be checked from Eq. (52) that the same number of crossings occurs for any ϕ = πn. Hence, if the spectrum is measured, the degeneracy d can be found by simply counting the number of crossing points at ϕ = πn. Note that, due to the terms with l > 1, we can have additional crossing points at ϕ ≠ nπ. Again, if time reversal symmetry does not hold, the crossing points can be shifted. One can still show that in the most general case there must be at least the same number of crossing points as the number of crossing points at ϕ = πn in the time reversal invariant case. The smallest number of degeneracy points occurs when the functions ΔE_p have a single maximum and a single minimum between 0 and 2πd. In that case, the energies that correspond to two different values of p must cross at two points between 0 and 2πd. We therefore have 2 crossing points for each pair p_1, p_2. The total number of degeneracy points, summed over all the pairs p_1, p_2, is therefore 2·(d(d−1)/2) = d(d−1), which is the number of crossing points at all the values ϕ = πn in the time reversal invariant case. Depending on the values of λ_l, we may have more than a single minimum and a single maximum, in which case we can get additional crossing points.
As an example we examine the case ν = 2/5, which can be characterized by a 2 × 2 K-matrix with det K = 5. The degeneracy on a torus in this case is d = 5, and the spectrum (in the time reversal invariant case) is ΔE_p = 2λ_1 cos[(ϕ + 2pπ)/5] + 2λ_2 cos[2(ϕ + 2pπ)/5], with p = 1 ... 5. If we take, for example, λ_2/λ_1 = 0.2, the resulting spectrum is shown in Fig. (7).

FIG. 7. The spectrum corresponding to ν = 2/5 with λ_2/λ_1 = 0.2 as a function of the relative phase difference ϕ. The periodicity of each eigenstate is 10π. At the points ϕ = πn, we find two crossing points whose splitting falls exponentially with L_x.
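A short numerical sketch (it only evaluates the spectrum quoted above; the parameter values are those of the example) confirms the 10π periodicity of each branch and the pairwise crossings at ϕ = πn:

import numpy as np

d = 5                    # ground state degeneracy for nu = 2/5
lam1, lam2 = 1.0, 0.2    # lambda_2 / lambda_1 = 0.2, as in the text

def energy(p, phi):
    # Delta E_p = 2*lam1*cos[(phi + 2*pi*p)/5] + 2*lam2*cos[2*(phi + 2*pi*p)/5]
    theta = (phi + 2 * np.pi * p) / d
    return 2 * lam1 * np.cos(theta) + 2 * lam2 * np.cos(2 * theta)

# Each branch is 10*pi periodic: E_p(phi) = E_p(phi + 2*pi*d)
assert np.isclose(energy(1, 0.37), energy(1, 0.37 + 2 * np.pi * d))

# At phi = 0 the branches p and d - p coincide, giving (d-1)/2 = 2 degenerate pairs
for p in range(1, (d - 1) // 2 + 1):
    assert np.isclose(energy(p, 0.0), energy(d - p, 0.0))

# At every time-reversal-invariant point phi = pi*n there are (d-1)/2 pairwise crossings
for phi0 in (0.0, np.pi):
    levels = np.sort([energy(p, phi0) for p in range(1, d + 1)])
    assert np.sum(np.isclose(np.diff(levels), 0.0)) == (d - 1) // 2

print("10*pi-periodic branches with (d-1)/2 crossings at each phi = pi*n")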
VI. CONCLUSIONS
The topological degeneracy on a torus is perhaps the defining property of a fractionalized phase, and the most prominent signature of topological order. As such, it is unfortunate that for the most accessible fractionalized phase – the Fractional Quantum Hall effect – it is impossible to directly create a toroidal geometry, which would require magnetic monopoles. In this work we study two annular geometries that are topologically equivalent to that of a torus. One geometry is based on an electron-hole double layer where the electrons and the holes are at fractional quantum Hall states of opposite filling fractions. The other is based on a fractional topological insulator in which the two spin directions of the electrons are at fractional quantum Hall states of opposite filling fractions. Both geometries carry counter-propagating edge modes on the interior and the exterior edges of the annuli, and these edge modes may be coupled and gapped by two mechanisms – back-scattering and proximity coupling to superconductors.
Considering the two dimensional regime where the annuli are too wide to have a significant coupling between the interior and the exterior edges, we established here the topological degeneracy that characterizes each of the geometries we consider, and their dependence on the gapping mechanism on each of the edges. Furthermore, we used the quantum number of the fractional charge or dipole on each of the edges to characterize the ground states.
In the regime where the annuli are narrow, such that the interior and the exterior are coupled, the degenerate ground states split in energy. Searching for remnants of the topological order that survive the transition to the quasi-one-dimensional regime, we studied the dependence of the spectrum of split ground states on the phase difference between the two superconductors, or on the relative angle between the directions of magnetization of the two magnets. We find that the spectrum includes points at which the splitting is exponentially small in the circumference of the annulus, and thus does not split even when the width becomes small.
At finite temperature there will be thermally excited pairs of quasiparticles and quasiholes in the bulk. When reaching the edge, these excitations carry the potential of introducing transitions between the states that cross at Figs. (2) and (7). The density of these quasiparticles and the resulting transition rates are expected to be exponentially small at low temperatures.
The spectra of Figs. (2) and (7) give rise to a remarkable experimental consequence. As long as experiments are done on timescales at which the exponentially small transitions between states at the crossing points may be neglected, the Josephson effect has a 2πd-periodicity, where d is the degeneracy in the 2D thermodynamic limit. Despite the fact that the degeneracy was lifted in the quasi-1D regime, it leaves an imprint in the Josephson effect.
Kolmogorov versus Iroshnikov-Kraichnan spectra: Consequences for ion heating in the solar wind
Whether the phenomenology governing MHD turbulence is Kolmogorov or Iroshnikov-Kraichnan (IK) remains an open question, theoretically as well as observationally. The ion heating profile observed in the solar wind provides a quantitative, if indirect, observational constraint on the relevant phenomenology. Recently, a solar wind heating model based on Kolmogorov spectral scaling has produced reasonably good agreement with observations, provided the effect of turbulence generation due to pickup ions is included in the model. Without including the pickup ion contributions, the Kolmogorov scaling predicts a proton temperature profile that decays too rapidly beyond a radial distance of 15 AU. In the present study, we alter the heating model by applying an energy cascade rate based on IK scaling, and show that the model yields higher proton temperatures, within the range of observations, with or without the inclusion of the effect due to pickup ions. Furthermore, the turbulence correlation length based on IK scaling seems to follow the trend of observations better.
Introduction
Since the proton temperature in the solar wind is observed to decrease with heliocentric distance slower than predicted by adiabatic expansion, it is believed that an in situ source is required to heat the solar wind [Freeman, 1988;Richardson et al., 1995].
The heating of the solar wind in the model is provided by the dissipation of turbulent energy, which cascades from large to small scales, and is eventually dissipated at the dissipation scale. Since in steady state, the heating rate is essentially the same as the energy cascade rate in the inertial range, the precise functional form of the cascade rate is an important ingredient of the model. Although different forms of the cascade rate were considered in the early development of the model [Matthaeus et al., 1994;Hossain et al., 1995], based on the Kolmogorov theory of hydrodynamic turbulence [Kolmogorov, 1941] as well as the Iroshnikov-Kraichnan (IK) theory of incompressible magnetohydrodynamic (MHD) turbulence [Iroshnikov, 1963;Kraichnan, 1966], later work involving detailed comparisons with observations was done assuming the Kolmogorov cascade rate. Whether anisotropic MHD turbulence should follow Kolmogorov or IK scaling remains an open question and a subject of significant ongoing research. We do not concern ourselves with this fundamental question here, but investigate the consequences of Kolmogorov or IK scaling as far as the problem of proton heating is concerned, assuming that the turbulence is isotropic.
We note that while there are many observations of the spectral index of solar wind turbulence that are considered more consistent with the Kolmogorov -5/3 value, rather than the -3/2 value of the IK theory [e.g., Goldstein et al., 1995], there are still large enough uncertainties in these observed values that none of these theories can be ruled out definitively [for a recent review on solar wind turbulence, see Bruno and Carbone, 2005].
For example, the observed values of the spectral index can change with time and location, and there are uncertainties regarding the precise extent of the inertial range. Recently, it has been reported that the spectral indices for velocity and magnetic fluctuations can be different, with the velocity index closer to the IK value and the magnetic index closer to the Kolmogorov value [Podesta et al., 2006, 2007; Tessein et al., 2009]. However, as pointed out in [Ng et al., 2003], although the difference between 5/3 and 3/2 is small, and not unambiguously resolvable by observations, the energy cascade rates (and thus turbulent heating rates) predicted by these two theories can have significant differences, by an order of magnitude. Therefore, looking at the effects of turbulent heating might provide another way to distinguish between these two theories.
Since the solar wind heating model based on Kolmogorov scaling has already been shown to produce good agreement with the observed ion temperature profile, one might expect that using the IK energy cascade rate would not produce good agreement since it provides a much smaller heating rate compared with the Kolmogorov rate, if the level of turbulent fluctuations is held fixed. However, a recent study suggests that the solar wind heating rate at 1 AU is more consistent with the expectation from the IK cascade rate, and is about an order of magnitude smaller than expected from the Kolmogorov cascade rate [Vasquez et al., 2007]. Since this study is carried out at one radial location, and the results are interesting as well as surprising, it is natural to ask what the results would be if the IK cascade rate is used instead of the Kolmogorov cascade rate in the solar wind heating model which attempts to make predictions of proton heating as a function of the radial distance. In this paper, we carry out this project.
Specifically, we will repeat the calculations described in Smith et al. [2001], Isenberg et al. [2003] and Smith et al. [2006], except that all terms that depend on the Kolmogorov cascade rate are replaced by those based on the IK cascade rate. The new set of evolution equations are given in Section 2. The predictions of the new equations, and comparisons with calculations based on the Kolmogorov cascade rate, are given in Section 3.
Discussion and conclusions will be presented in Section 4.
Solar Wind Heating Model
The solar wind heating model discussed in Section 1 is derived based on several strong and simplifying assumptions (see Breech et al. [2008] and other references therein). Among the principal assumptions are a steady and spherically symmetric solar wind, an isotropic Kolmogorov scaling, a constant radial solar wind speed V_SW, and a constant Alfvén speed V_A (<< V_SW). Under these conditions, the evolution of solar wind turbulence as a function of heliocentric distance can be modeled by the set of equations (1)-(3) [Smith et al., 2001, 2006; Isenberg et al., 2003; Isenberg, 2005]. In this model, the turbulence is characterized by two quantities: the average fluctuation energy (in Elsässer units) Z² = δv² + δb²/4πρ, where ρ = nm is the solar wind density (n and m are the proton density and mass, respectively), and the correlation length of the fluctuations, λ. Note that by describing the turbulence energy by only one field, Z², we have assumed zero cross-helicity. In this paper, we will concentrate on the case with zero cross-helicity, although we have also obtained similar results by generalizing the set of equations to include nonzero cross-helicity. The constant parameters A′ (negative) and C′ are model coefficients [Hossain et al., 1995; Matthaeus et al., 1996]. The factor Z³/λ in the second term on the right hand side of Eq. (1) or (3) is due to the Kolmogorov cascade rate (see discussion below in this section). Here T is the solar wind proton temperature, which evolves passively according to Eq. (3) but does not affect the evolution of Z² and λ. Note that the first term on the right hand side of Eq. (3) describes the adiabatic cooling due to the expansion of the solar wind, while the second term represents the heating due to dissipation of the turbulent energy.
As pointed out above, this set of equations is based on the assumption of a Kolmogorov cascade. In the Kolmogorov theory, the energy δv² of the scale λ is estimated to cascade to the next scale in an eddy turnover time τ ~ λ/δv. Therefore, the energy cascade rate is ε ~ δv²/τ ~ δv³/λ. On the other hand, in the IK theory of MHD turbulence, the energy cascade is inhibited by the fact that an Alfvén wave packet moving in one direction does not cascade energy to smaller scales except when it collides with another Alfvén wave packet moving in the opposite direction. However, this collision time τ_A ~ λ/V_A is much smaller than the eddy turnover time τ ~ λ/δv, so that it takes many random collisions to cascade the same amount of energy. In fact, the energy cascade time can be estimated to be τ_E ~ τ²/τ_A ~ λV_A/δv², and thus ε ~ δv²/τ_E ~ δv⁴/(λV_A). In order to examine the effect of using the IK cascade, we rewrite Eqs.
(1)-(3), with the results given as Eqs. (4)-(6). Note that the factor Z⁴/(λV_A) in the second term of the right hand side of Eq. (4) or (6) is due to the IK energy cascade rate. The form of the second term on the right hand side of Eq. (5) follows Matthaeus et al. [1994] and Hossain et al. [1995]. Although the pickup ion terms involving Q appear formally unchanged, they too depend on the assumed energy cascade rate, as described below.
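As a rough numerical illustration of the difference between the two cascade rates derived above (a sketch; the input values below are representative near-1 AU numbers chosen by us for illustration, not values taken from the paper), for the same fluctuation amplitude the two estimates differ by the factor δv/V_A:

# Illustrative comparison of the Kolmogorov and IK cascade-rate estimates,
# epsilon_K ~ dv^3 / lambda and epsilon_IK ~ dv^4 / (lambda * V_A).
AU = 1.496e11            # astronomical unit [m]
dv = 30e3                # turbulent fluctuation amplitude [m/s] (assumed)
V_A = 50e3               # Alfven speed [m/s] (assumed)
corr_len = 0.03 * AU     # correlation length [m] (assumed)

eps_K = dv**3 / corr_len              # Kolmogorov rate [m^2 s^-3]
eps_IK = dv**4 / (corr_len * V_A)     # Iroshnikov-Kraichnan rate [m^2 s^-3]

print(f"epsilon_K  ~ {eps_K:.2e} m^2 s^-3")
print(f"epsilon_IK ~ {eps_IK:.2e} m^2 s^-3")
print(f"epsilon_IK / epsilon_K = dv / V_A = {dv / V_A:.2f}")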
The function Q in this model is calculated from an expression in which ζ, to be determined later, is the fraction of newly ionized pickup proton energy that generates waves. Here V_SW²/n is the initial kinetic energy per pickup proton, in the same units as Z², in the plasma frame, and dN/dt is the rate at which pickup protons are created, which can be modeled in terms of the scale of the ionization cavity and N_0, the neutral hydrogen density at the termination shock. Following Isenberg et al. [2003] and Isenberg [2005], the factor ζ is calculated subject to an initial condition, with a scale factor calculated by taking the difference between v(µ) and another solution of Eq. (9) using the initial condition, and with the phase and group velocity of the j-th wave mode resonating with the cyclotron resonant wave number, where Ω is the proton cyclotron frequency. Using the cold plasma dispersion relation, V_j can be obtained by solving a third-order equation, and W_j is given accordingly. Note that there is only one resonant wave mode if µv < 1.5 3V_A, and three modes otherwise. The function I(k) in Eq. (9) is determined by the one-dimensional energy spectrum of the turbulence; when the energy spectrum is Kolmogorov, I(k) takes the form of Eq. (14). Note that the function A(r) does not enter the final result, since it is cancelled in Eq. (9) at each position r. For the present study using the IK scaling in Eqs. (4)-(6), we need to use the IK spectrum of Eq. (15) instead. Note that the above formulation to calculate ζ follows Isenberg [2005], which is a corrected version of the analysis in Isenberg et al. [2003]. The correction was shown in Isenberg [2005] to change the resulting solar wind temperature only slightly, and both agree well with observations.
The coefficients A′ and C′ in these two sets of equations can in principle be different, depending on the spectral index, and this variation may change the model predictions significantly. However, since we estimate them by dimensional arguments [e.g., see Breech et al., 2008], which do not depend on the spectral index explicitly, we will choose values for A′ and C′ that are the same as those used in previous studies [Smith et al., 2001, 2006; Isenberg et al., 2003; Isenberg, 2005], in order to have a meaningful comparison with earlier results.
Numerical Results
We now present numerical results obtained by solving Eqs. (4)-(6), with the Q term calculated using the IK scaling (15). In order to compare with previous results obtained by Smith et al. [2001] and Isenberg et al. [2003], based on Eqs. (1)-(3) with Kolmogorov scaling, we will use the same parameters as in these two earlier papers, summarized in Table 1. Also, Fig. 1 and Fig. 2 are plotted in formats very close to corresponding figures in these papers (i.e., Fig. 5 in [Isenberg et al., 2003] and Fig. 7 in [Smith et al., 2001]) for ease of comparison.
In the first case, based on parameters used in Isenberg et al. [2003], results on the proton temperature are plotted in Fig. 1. The fluctuating curve is the running average of the solar wind temperature measured over 51 days by Voyager 2 versus the heliocentric distance r. The purple dashed curve represents the prediction of temperature with only adiabatic cooling, i.e., only keeping the first term of the right hand side of Eq. (3). As is well known, this prediction is much lower than the observed temperature. This indicates the need for including a heating source in the model, e.g., the turbulence cascade, represented by the second term on the right hand side of Eq. (3) or (6).
The solid black curve in Fig. 1 is the model temperature predicted by Eqs. (1)-(3), based on the Kolmogorov scaling (14), as calculated by Isenberg et al. [2003], including the effect of pickup protons. We see that the prediction of the model agrees well with observations. Such good agreement between observations and model predictions based on the Kolmogorov theory can perhaps be interpreted by some as a confirmation of the correctness of the Kolmogorov scaling in solar wind turbulence, but our results indicate that this is not the only possible conclusion.
The green curve is obtained by again using Eqs. (1)-(3), but with Q set to zero so that we may see the effect of pickup ions. This curve is basically the same as the black curve up to around 10 AU, since the effect of pickup ions is only significant in the outer heliosphere. Beyond this distance, we see that the prediction without the Q term would be significantly lower than observations suggest, and does not show the trend of increasing temperature beyond around 20 AU. We thus see that the effect of the pickup ions in this model is indeed very important in obtaining good predictions of solar wind temperature in the outer heliosphere.
The blue curve is calculated from our model based on the IK cascade, i.e., Eqs.
(4)-(6) with Q = 0. We see that the predictions of the proton temperature in this case are significantly larger than those given by the green curve beyond around 5 AU. In fact, the predictions given by the blue curve appear to be consistent with observations despite the exclusion of pickup ions, and fall below the observed temperature only beyond about 40 AU.
Since the black curve with Q is significantly higher than the green curve for Q = 0, one might expect that adding the effect of pickup ions to the IK cascade rate will overestimate the proton temperature when compared with observations. However, somewhat surprisingly, the red curve, which is obtained from Eqs. (4)-(6) with Q determined using the IK scaling (15), is seen to lie only slightly above the blue curve.
This shows that the effect of turbulence generation due to pickup ions is weaker when the IK spectrum (15) is used instead of the Kolmogorov spectrum (14). We can understand this by considering the physics of the Q term, which is essentially a measure of whether pickup ions give energy to a spectrum of waves (positive Q), or gain energy from it (negative Q). A pickup ion gives energy when it interacts with a backward moving wave at smaller wave number k, but gains energy when it interacts with a forward moving wave at larger k, due to the Doppler effect as indicated in Eq. (10) (see also [Isenberg et al., 2003;Isenberg, 2005]). The strength of such interactions is proportional to the intensity of the waves. For a turbulent spectrum of waves that decrease in intensity at larger values of k, pickup ions give energy to waves and result in a positive Q term. So, when the IK spectrum, which is flatter in k space, is used, there is a stronger cancellation between the two effects, resulting in a smaller value of Q.
We have mentioned that the IK cascade rate is smaller than the Kolmogorov cascade rate, for the same level of turbulence. In view of this, the above results, which show that the IK scaling actually produces a higher temperature, might seem counterintuitive. To understand this physically, we also need to look at the comparisons of Z² and λ. We will now do so by repeating the calculation in Smith et al. [2001] using our model based on the IK cascade, since the plots of Z² and λ in [Isenberg et al., 2003] only have model outputs without observation data. Note that the model curves in the temperature plot, i.e., Fig. 2(c), are slightly different from those in Fig. 1, due to differences in the parameters used, as indicated in Table 1, not because of any difference in model methods or observational data.
One unexpected result of the present study is that the heating provided by the IK cascade is actually at the same level as, or even greater than, that obtained by using the Kolmogorov cascade. This can be understood by noting that Z² decreases in the Kolmogorov case faster than it does in the IK case. This effect is amplified by the fact that a smaller Z² generates less turbulence (note that A′ < 0). This is also why the solar wind temperature in the Kolmogorov case is slightly larger than that in the IK case around 1 AU, up to about 3 AU. However, as this effect continues, ε_IK begins to catch up with ε_K when Z² in the Kolmogorov case is much smaller than that in the IK case, and thus the temperature predictions of the two runs become roughly the same. At larger r, ε_IK is actually larger than ε_K when Q = 0. ε_K gets back to about the same level as ε_IK only after we include the effect of pickup ions, since this effect is stronger when the Kolmogorov spectrum is used. Thus, after all these effects are taken into account, the predictions of the solar wind temperature are roughly the same in the two cases, despite the fact that the turbulence cascade rates of the two theories are very different.
Finally, we also consider the case presented in Smith et al. [2006], who use a more direct method of comparing predictions from the model equations (1)-(3) with observations. In the cases discussed above, the boundary conditions on Z², λ, and T at r = 1 AU are held fixed in obtaining predictions for all r. However, the predictions of the model are being compared with observations obtained at different positions, and necessarily at different times, since the data are obtained from the same steadily moving spacecraft as it moves out towards the outer heliosphere. Therefore, an implicit assumption of the earlier studies is that the solar wind conditions are quasi-steady.
However, this assumption is not generally true. To get around this difficulty, Smith et al.
[2006] used the observed solar wind speed (which is assumed to be constant) at different positions r to determine when that fluid element actually passed through 1 AU. Then the solar wind conditions at that time at 1 AU are determined using Omnitape data, and used as boundary conditions for Eqs. (1)-(3). More detailed description of this method can be found in Smith et al. [2006]. Here we follow their method, and repeat our study.
In Fig. 3(a), the red curve is the proton temperature T_p in K as a function of heliocentric distance in AU, calculated from Eqs. (1)-(3) using the method and parameters of Smith et al. [2006]. The discrete data points are from Voyager 2 observations. We see that the predictions from the model are consistent with observations until about 43 AU. From there to about 55 AU, the predictions are substantially lower than observations. This is identified by Smith et al. [2006] as a latitude effect, since Voyager 2 was at high latitude. The predictions beyond 55 AU are also found to be somewhat higher than observations (about a factor of two on average). In Fig. 3(b), the curve is now calculated from Eqs. (4)-(6) using the same parameters. We see that the agreement with observations up to about 43 AU is about the same as in case (a). At the same time, the predictions beyond 55 AU are now lower, consistent with observations. The discrepancy with data from 43 to 55 AU is worse; however, since the main discrepancy in this region is due to the high-latitude effect, it is hard to separate out the effects due to turbulence spectral laws. For the IK case, we also do two more test runs. In Fig. 3(c), we repeat the run as in (b), but artificially set Q to zero. We then see that predictions beyond 55 AU become lower and less consistent with observations, although not by much. Overall, this seems to suggest that the pickup ion term does possibly provide important corrections, although these corrections are not as crucial for IK scaling as they are for Kolmogorov scaling. To reinforce this point, we run the case in Fig. 3(d), where we repeat run (b) but with the Q term calculated using a spectrum with a spectral index of 5/3 (Kolmogorov) rather than 3/2 (IK). For this case, we see that the predictions for the proton temperature are slightly higher than those in (b) for r beyond 55 AU, but not as high as in case (a). This suggests that the most important effect of the change from Kolmogorov to IK scaling is due to other terms in Eqs. (4)-(6), rather than the Q term.
Summary and Conclusions
In this paper, we have investigated the effect of turbulence scaling laws on the heating of the solar wind by substituting the IK cascade rate into a solar wind turbulence evolution model, replacing the Kolmogorov cascade rate, and comparing with observational results from Voyager 2 on the solar wind temperature, turbulence energy level, and correlation length. The surprising result of this study is that the solar wind temperature predicted by using the IK cascade is comparable with that using the Kolmogorov cascade. This is true whether the effect of pickup ions (the Q term) is included or not (including the pickup ion term does seem to give slightly better results), since we show that the effect of pickup ions is weaker when the IK spectrum is used than when the Kolmogorov spectrum is used. The reason for this is principally that the turbulence energy level (Z²) in the IK case decays more slowly than that in the Kolmogorov case as we move out radially in the heliosphere. The predictions for the correlation length (λ) in the IK case are also consistent with observations, with or without the pickup ions. This is in contrast with the Kolmogorov case, which has the correct trend only when the effect of pickup ions is excluded, but shows a qualitative discrepancy with the data when the effect of pickup ions is included.
Since this solar wind turbulence evolution model is based on drastic assumptions and the observations have significant uncertainties, the fact that the model using either cascade law has predictions that are consistent with observations does not necessarily confirm the correctness of either scaling law. However, from the present study, we do see that the IK theory produces at least as good a comparison with observations as the Kolmogorov theory. More theoretical as well as observational investigations are necessary to distinguish between the consequences of each phenomenological theory for solar wind turbulence.
There are at least two ways to improve the existing model: including the effects of cross-helicity in the presence of the IK cascade, and including the effects of anisotropy. We plan to investigate these effects in the future.

TABLE 1. Parameters and boundary conditions used in Fig. 5 of Isenberg et al. [2003] and Fig. 7 of Smith et al. [2001].
|
2017-09-29T09:44:33.297Z
|
2007-12-01T00:00:00.000
|
{
"year": 2011,
"sha1": "ed9eb1dbb5f3799f4f1cad30180b900ffa1d026f",
"oa_license": null,
"oa_url": "https://agupubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2009JA014377",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "94a21499185f6f9ce90934db7c41b2b4461b3bf7",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
268379903
|
pes2o/s2orc
|
v3-fos-license
|
How early environment influences the developing brain and long‐term mental health
Abstract The March 2024 issue of JCPP Advances features two neuroimaging studies that investigate links between early environmental risk factors for mental health problems, brain development and psychopathology in children and young adults. The papers provide new insights into how adverse environments and negative experiences in childhood increase risk for depression and mental health problems, and how this may or may not be mediated, or moderated, by individual differences in the brain.
INTRODUCTION
It is common to dichotomise between a biomedical model of mental health research, in which the focus lies on neurobiological, internal causes, and a psychosocial model in which external factors and social interactions are central. However, it is through our brain that we perceive and process our environment. The environment gets under the skin mainly via our brain. Therefore, biomedical and psychosocial mechanisms underlying mental health are highly intertwined. The present issue of JCPP Advances illustrates this, featuring two original research articles that investigate the relationships between environmental factors in childhood, variation in brain anatomy, and mental health problems.
The first paper, by Backhausen et al. (2024), investigated longitudinal links between negative life events in childhood, prefrontal cortex development, and depressive mood in young adulthood. The second article, by Norbom et al. (2024), identified anatomical brain patterns in children that are associated with their parents' income levels and educational attainment, and that may be potential markers of resilience to socio-economic adversity. These studies complement other recent research (Xu et al., 2023) that looks for links between external risk factors for psychopathology in childhood and individual brain differences, to understand how adverse experiences in childhood can potentially become entrenched and have a long-lasting impact on mental health throughout life.
LONG-TERM EFFECTS OF NEGATIVE LIFE EVENTS IN CHILDHOOD ON BRAIN DEVELOPMENT AND MENTAL HEALTH
Severe childhood adversity and trauma are known risk factors for depression in childhood, as well as in adulthood (Li et al., 2016). Less is known about whether the accumulation of more common, less severe negative experiences also constitutes a risk factor for depression throughout life. In the longitudinal IMAGEN Consortium cohort (N = 321), Backhausen et al. (2024) studied the impact of commonly encountered negative life events in childhood, ranging from illness and death in the family to social and school problems, on depressed mood in young adulthood. Negative life events prior to age 14 were found to cumulatively impact depressive mood in young adulthood, up to 8 years later. The study thereby provides important new knowledge on the long-lasting impact of commonly experienced negative life events in childhood, in a general population cohort.
To gain insight into how the imprint of negative experiences may become engrained and can continue to be impactful over so many years, Backhausen et al. tested whether the experience-depression association is mediated by altered development of the orbitofrontal cortex (OFC). The OFC is a reasonable candidate to harbour consequences of negative experiences, since it plays a role in the emotional valuation of external stimuli and memories (Dixon et al., 2017). The OFC has direct anatomical connections with the amygdala and hippocampus, forming a key network that is related to the causes and the remission of depression (Rolls, 2019). Moreover, the OFC and its fibre connections continue to mature into the late teens and early twenties.
Therefore, this can be a vulnerable period during which interference with OFC development may have long-lasting effects. In the longitudinal study by Backhausen et al. (2024), OFC thickness was measured four times between ages 14 and 22, and showed the expected developmental trajectory of thinning, attributable to synaptic pruning. A thicker OFC at age 14, and a steeper pattern of cortical thinning over time, were associated with more depressed mood at age 22. Thus, both negative experiences during childhood and adolescent OFC development were associated with depressed mood in adulthood. However, contrary to expectations, the association between negative experiences and depression was not mediated by altered development of the OFC. Instead, OFC development and negative life experiences were independently associated with depression later in life. The neurobiological mechanisms by which the early environment gets under the skin, and remains impactful over many years, appear hard to discern. They may well be subtle and distributed throughout multiple regions and tissues, and heterogeneous depending on the individual and the nature of the experience. Whole-brain studies that investigate more neuroimaging modalities in larger samples will help to further inform on this fundamental question.
SEARCHING FOR BRAIN PATTERNS REFLECTING CHILDREN'S RESILIENCE TO SOCIO-ECONOMIC ADVERSITY
Norbom et al. (2024) report a study of the effects of parental education levels and income on general psychopathology in 9758 children aged 9-11. They corroborate previous accounts that socio-economic adversity is (subtly but significantly) associated with mental health problems in children (Reiss, 2013), and they further investigated the potential role of neuroanatomical variation in individual vulnerability and resilience to socio-economic adversity. Previous research has already indicated that socio-economic factors are associated with structural brain measures (Rakesh & Whittle, 2021). However, so far results have often been inconsistent across anatomical locations and neuroimaging measures. Socio-economic status is a multidimensional construct that reflects a combination of many correlated factors, including wealth and education; specific variables like noise levels, toxins, and green space; as well as more subjective factors such as perceived social class, social support, and stress. Such a composite measure is unlikely to have very specific effects on circumscribed brain regions or tissues. More likely, the many correlated factors cumulatively and synergistically contribute to many diverse, subtle alterations across brain regions and tissues. What stands out about the present study of Norbom et al. (2024) is that they applied a multivariate method, linked independent component analysis (LICA; Llera Arenas et al., 2018), that is most sensitive to detecting such expected non-specific effects across combinations of regional neuroimaging measures of cortical thickness, surface area, and grey-white matter contrast.
The takeaway message of their neuroimaging findings is threefold. Firstly, they confirm previous findings that children in more economically affluent families have, on average, slightly larger cortical surface area, particularly in the prefrontal cortex. Secondly, they found additional associations with grey-white matter contrast in widespread cortical regions, and with increased thickness, surface area and grey-white matter contrast in the insula and temporal lobe.
Especially the latter finding would be hard to detect in standard unimodal neuroimaging studies, as it was driven by the combination of multiple neuroimaging metrics. Thirdly, Norbom et al. found that a pattern of larger prefrontal surface area combined with a smaller occipital surface area attenuated the association of socio-economic disadvantages with mental health problems, consistent with the possibility that children with a relatively larger prefrontal cortex may be more resilient to socio-economic disadvantages.
Whilst the idea of a resilience signature in the brain is an intriguingly positive message, the cross-sectional nature of the study, and the small effect sizes (Cohen's d < 0.15), put the findings into perspective. The importance of this study is not so much that it has direct implications for individuals or families, but rather that it makes tangible the subtle, non-specific but evidently widespread associations of environmental adversity with the brain and mental health.
IMPLICATIONS AND FUTURE DIRECTIONS
The current issue of JCPP Advances includes two original articles that investigated the associations of environmental risk factors with mental health and brain development. Both studies found associations of early environmental factors and prefrontal brain regions with mental health measures. A strength of the first study, by Backhausen et al. (2024), is its longitudinal design spanning more than 8 years of adolescence. However, the study did not find evidence for the hypothesised mediating role of OFC development as a mechanism for how negative experiences in childhood cause depression later on. The study was hypothesis-driven, and it may have missed other potentially mediating brain measures. By contrast, the data-driven approach by Norbom et al. (2024) indicates that effects of environmental adversity on the brain are not region- or modality-specific. However, this study lacked a prospective component, thereby leaving causal inference to speculation. Future research that combines these multimodal environment-sensitive brain patterns with the prospective design of Backhausen et al. (2024) would be ideally suited to better understand the longitudinal dynamics, chronicity, and possible causality of environmental influences on brain and mental health throughout development. Establishing causal mechanisms between environmental factors, brain and mental health throughout development is a major challenge. With the increasing availability of large population studies spurring fully data-driven research, the web of relevant risk and resilience factors is expanding. Yet, causal inference for risk factors that cluster within families is far from trivial (Cheesman et al., 2023; Plomin, 2022; Sprooten et al., 2022). Since mental health, brain structure, and most environmental factors are heritable, shared genetic vulnerability for psychopathology and environmental risks may be partly responsible for their covariation across families. The causal pathways that could explain the observed brain-environment-mental health correlations are many. For example: (a) inherited brain vulnerabilities combined with family-specific risk environments cause mental health problems; (b) the inheritance of genetic risk factors simultaneously increases the likelihood of encountering environmental risk factors, associated brain patterns, and mental health conditions; (c) mental health problems of the parents cause adverse circumstances for the children, which affect their brain development and later mental health. Future research that incorporates genetics and neuroimaging to map this gene-environment interplay within and between families is promising for mapping the possible causal effects and their reciprocity throughout development. The two papers reviewed here are important indicators of what is to come next. The quest to identify which modifiable factors are relevant for whom, in which family, and under which circumstances, requires a deep appreciation and integration of psycho-social-societal and biomedical perspectives. By studying the effects of environmental risk factors (i.e. negative life events and socio-
|
2024-03-15T05:10:52.498Z
|
2024-02-28T00:00:00.000
|
{
"year": 2024,
"sha1": "a2079053e12f45ddc8f0d9f835477f7160577b42",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/jcv2.12230",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a2079053e12f45ddc8f0d9f835477f7160577b42",
"s2fieldsofstudy": [
"Psychology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
139503280
|
pes2o/s2orc
|
v3-fos-license
|
Understanding the Machinability of Austempered Ductile Iron (ADI)
Guidelines for production milling, turning and drilling of the standard grades of austempered ductile irons (ADI) have been established. Electron Backscatter Diffraction (EBSD) characterization has clearly shown that severe plastic deformation in the machining-affected-zone, ahead of and beneath the cutting tool, will cause strain-induced martensitic transformation of the austenite in the ausferrite structure that inhibits machinability. This phenomenon is particularly of concern during finish machining where small depths of cut are strongly influenced by surface martensite from prior machining passes.
Introduction
The lack of information on machining ADI is startling in light of the growing interest in the use of ADI as an engineering material. Although narrowly focused machining studies of ADI have been conducted, state-of-the-art knowledge on machining ADI is still very scarce. The only machining recommendations currently available are provided by Klocke [1] (for turning only) and by Zanardi [2].
Although ADI can be fully machined prior to heat treatment as an as-cast ductile iron, or rough machined before and finish machined after heat treatment, it is important to be able to fully machine ADI after heat treatment. Direct machining of fully heat-treated ADI is especially necessary when lead-time reduction and tight dimensional tolerances are required.
Additional analysis of the phase transformation occurring when machining ADI was performed. It has been suggested that the poor machinability of ADI is due to its low thermal conductivity, its tendency to work harden, and its susceptibility to martensitic transformation. The stabilized austenite in the ADI matrix structure, though thermally stable at room temperature, can be expected to have limited stability when subjected to large mechanical strains. These high plastic strain conditions can be expected at and just below a machined ADI surface, in what may be referred to as the machining-affected-zone (MAZ).
Researchers have observed that low-depth-of-cut machining operations after rough cutting of ADI are difficult to perform, but there has been no direct observation of the surface martensite transformations that can be expected to decrease machinability. Aslantas [3] used cutting force measurements as an indirect indicator of the presence of martensitic transformation. Meena [4] interpreted an increase in hardness measurements as a sign of an increasing amount of transformed austenite. Polishetty [5] measured the weight percent of martensite using XRD, in addition to the cutting forces and heat-tinting metallographic analysis. EBSD will be shown to be an improved technique for directly identifying the presence of the transformed and deformed layers in the MAZ of ADI.
Material Characterizations
The resultant mechanical properties for the Grade 1 ADI and Grade 3 ADI, obtained per ASTM E8/E8M-13a, are shown in Table 1. Both Grade 1 ADI and Grade 3 ADI met the minimum mechanical properties of Grade 1 ADI and Grade 3 ADI based on ASTM A897/A897M-16. Brinell hardness testing was completed using a 3000 kgf load with a 10 mm tungsten carbide ball indenter. In addition, the hardness was measured on the standard ductile iron Grade 100-70-03 (as-cast), which was used as the reference material in the facing and drilling experiments. Similarly, a common AISI 4340 steel was used as a comparison reference material for the milling experiments. The hardness of the as-cast 100-70-03 ductile iron was 270 HBW, which was lower than all of the grades of ADI studied. Meanwhile, the AISI 4340 steel had a bulk hardness similar to that of the Grade 2 ADI, 351 HBW. The austenite contents for Grade 1 ADI, Grade 2 ADI and Grade 3 ADI obtained by XRD were 40%, 37% and 30%, respectively.
Experimental Methods
A series of face milling, turning and drilling experiments were conducted to evaluate the machinability of ADI. The casting scale of the ADI workpiece was removed prior to testing because the influence of surface scale was not of main interest.
Face milling experiments were carried out using a 3-insert tool configuration. The experiments were performed on a HAAS VF-2 vertical CNC Mill with the use of a large modular vise to support the workpiece (12x6x1 in). Seco ONMU090520ANTN-M14 MK2050 (TiSiN-TiAlN Nanolaminate PVD coated) was used as inserts with QuakerCool 7020-CG coolant with a 7-8% coolant concentration. The tool life experiments were conducted according to ISO 8688-2. Useful tool life was defined as the time when an insert reached the maximum flank wear penetration (VBmax) measured from the uniform wear of 0.3 mm (average wear across all teeth) or localized wear of 0.5 mm (on any individual tooth).
Turning experiments were performed on a workpiece in the shape of a hollow cylinder with an outside diameter of 6.85 inches (174 mm) and an inside diameter of 4.5 inches (114 mm) using a HAAS ST-20 CNC Lathe. Seco CNMG120408-M5, TK 2001 inserts with Ti(C,N) + Al2O3 Duratomic CVD coatings were used in the turning trials, and QuakerCool 7020-CG with 7-8% concentration was used as coolant. Useful tool life was defined as the time when the inserts reached a maximum flank wear penetration (VBmax) measured as 0.3 mm. Wear land measurements were made over intervals corresponding to a constant volume of material removed (e.g., after each pass) using a stereoscope.
Drilling experiments were carried out using a HAAS VF-3 vertical CNC Mill with thru-spindle coolant capability. A 12x6x1 in. (305x151x25 mm) plate workpiece was clamped on all four corners to allow for through-hole drilling. The drill used was a SECO CrownLoc Plus SD403-12.00/12.49-38-16R7 (Ø 12 mm) with SD400-12.00-P (PVD coated with TiSiN-TiAlN Nanolaminate) inserts. The drilling operations were run using QuakerCool 7020-CG coolant with 7-8% concentration. The tool life was defined as the time of drill fracture.
Metallographic samples used for sub-surface analysis were cut from rolled strip, milled plates and drilled plates. An ADI strip with a thickness of 0.125 in. was cut from the Grade 1 ADI plate. This strip was then rolled into 0.110 in. thickness in 10 passes. As for the milling samples, Grade 1 ADI plates were milled using a combination of cutting speeds of 787 ft/min (240 m/min) and 1575 ft/min (480 m/min), feed rates of 0.002 in/tooth (0.05 mm/tooth) and 0.006 in/tooth (0.15 mm/tooth), and depths of cut of 0.008 in (0.2 mm) and 0.06 in (1.5 mm).
Microstructural samples were ground using 60 to 4000 grit SiC paper and then polished using 1 μm followed by 0.3 μm diamond suspension. An additional 0.06 μm colloidal silica step was used for the EBSD samples. Low-load Knoop microhardness (0.3 kgf) was used to identify the change in near-surface hardness. An EBSD imaging technique was explored to permit the ferrite and martensite phases to be readily differentiated from the austenite phase.
Results and Discussion
Machining Guidelines. Table 2 summarizes the Taylor tool life models for the different grades of ADI and the various machining operations. These models can be used to generate baseline machining parameters for machining commercial grades of ADI at a given expected tool life. The recommended starting parameters for machining different grades of ADI, with tool-life-based variations of the cutting speed recommendations, are also shown in this table. The importance of establishing this relationship is the ability to accommodate the different priorities of machine shops (i.e., productivity vs. cost). When a high productivity level is required, the use of a high cutting speed is desirable. However, this also means that the tool will wear at a faster rate, and thus the tooling cost will increase. Expressing machining recommendations and ranges using "tool life ranges" is an effective way for machine shops to readjust cutting speeds during initial production machining trials to establish the desired cutting speed for an appropriate tool life.
Multiple linear regression analysis was performed on data obtained from the series of face milling experiments to estimate the machinability constants in a modified Taylor tool life equation (V·T^x·f^y·d^z = C) for Grade 1 ADI and Grade 3 ADI. From these constants, it can be observed that tool life was most sensitive to changes in cutting speed, followed by the feed rate and the depth of cut, when machining Grade 1 ADI. In production machining, small improvements in tool life can result in significant cost savings.
While positive values for the constants x, y and z in modified Taylor tool life equations (V·T^x·f^y·d^z = C) are typically obtained, a negative value for the constant z (corresponding to depth of cut) was observed when machining ADI. This implies that tool life increased with a deeper depth of cut. This phenomenon is in agreement with the general advice that a lower cutting speed with a deeper cut is required to increase tool life when machining ADI [6]. Small depths of cut are believed to be detrimental to tool life due to the strain-induced surface martensite transformation that occurs while machining ADI. When a deeper cut is taken, the proportion of this surface martensite removed during cutting becomes less significant in comparison to the bulk of the material being removed.
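As a minimal illustration of how such a tool life model can be used in practice, the short sketch below rearranges the modified Taylor equation to return a cutting speed for a target tool life; the exponent and constant values are purely hypothetical placeholders and are not the regression constants obtained in this study (refer to Table 2 for the actual recommendations).

    def cutting_speed_for_tool_life(tool_life_min, feed, depth, x, y, z, C):
        """Rearranged modified Taylor tool-life model V * T^x * f^y * d^z = C.

        tool_life_min : target tool life T in minutes
        feed          : feed f (e.g., mm/tooth)
        depth         : depth of cut d (mm)
        x, y, z, C    : machinability exponents and constant from regression
        Returns the cutting speed V expected to yield the target tool life.
        """
        return C / (tool_life_min**x * feed**y * depth**z)

    # Hypothetical example values (not the constants fitted in this work):
    V = cutting_speed_for_tool_life(tool_life_min=30.0, feed=0.10, depth=1.5,
                                    x=0.35, y=0.20, z=-0.05, C=900.0)
    print(f"suggested cutting speed ~ {V:.0f} m/min")

Note that with a negative z, as observed here, the speed computed for a fixed tool life rises slightly as the depth of cut increases, mirroring the observation that deeper cuts are less affected by the hardened surface layer.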
Milling parameters with coated carbide and coolant recommended for milling ADI are presented in Table 2. It must be noted that 420 m/min and 380 m/min are the maximum limits of cutting speed for milling Grade 1 ADI and Grade 2 ADI, respectively. Cutting speeds of 120 m/min and 280 m/min are set as the lower and upper boundaries for milling Grade 3 ADI.
It should be pointed out that the turning tool wear studies and the initial turning recommendations in this study are based on the measurement of secondary flank wear rather than primary flank wear. The tool wear results obtained from the secondary flank wear measurements used in this study are considered reliable because similar measurements were observed in the steady-state wear region, with only a slight difference in the break-in period and the failure region. However, this difference is considered insignificant. The observation of similar wear measurements on the major and minor flank faces of carbide inserts was also confirmed by Masuda et al. [7]. This similarity in wear measurements on both flank faces is also observed when machining TRIP materials such as Hadfield steel. Kuljanic et al. [8] noticed severe wear on the major flank face, the minor flank face and the rake face.
The recommended turning speeds from this study were comparable to, and slightly higher than, those recommended by Zanardi [2], with the exception of Grade 3 ADI. It should also be noted that slightly higher feed rates and lower depths of cut were used in this study.
Unlike in the milling and turning operations, careful data analysis suggested that the feed was the main factor affecting drill life, followed by the cutting speed. An increase in the feed rate was believed to cause a higher temperature rise in the drill than an increase in the drilling speed. This effect was more pronounced with increasing material hardness.
In general, ADI should be machined at 25% lower cutting speeds than conventional steels of comparable bulk hardness. Although similar patterns of flank wear were observed when machining ADI and steel of similar overall hardness, higher levels of crater wear were observed when machining ADI [1]. Because of the presence of graphite in the Grade 2 ADI microstructure, even though its bulk hardness was similar to that of the AISI 4340 steel, it can be expected that the matrix hardness of the Grade 2 ADI was significantly higher than its bulk hardness. Therefore, the expected tool life when machining AISI 4340 would be longer than when machining Grade 2 ADI of similar overall hardness. Similarly, appropriate cutting speeds for machining the widely used Grade 1 ADI are 25% lower than those used for grade 100-70-03 as-cast ductile iron.
Machining-Affected-Zones. Although microhardness measurements are commonly used to indicate surface hardening phenomena, the machining-affected-zone (MAZ) transformed layer is very thin, and it is not possible to differentiate between the transformed layer, the deformed layer and the overall machining-affected layer (Fig. 1) with microhardness measurements alone. A characteristic Knoop hardness indentation has a width of approximately 17 μm. This width is greater than the expected thickness of the transformed layer, which may be only 5-10 μm deep. Furthermore, the required distance from the specimen edge must also be considered when performing hardness measurements. Fig. 2(a) shows surface microhardness measurements for Grade 1 ADI compared to Grade 1 ADI rolled from an initial thickness of 0.125 in. to a final thickness of 0.110 in. (12.7% plastic strain). The average hardness of untreated Grade 1 ADI was found to be 387 HK (Knoop hardness). Increased surface hardness measurements for the rolled sample can be expected due to strain hardening of the surface layers. Average microhardness values for the rolled sample near the surface (less than 400 μm, or 0.015 in., below the surface) and deeper below the rolled surface (greater than 400 μm below the surface) were 503 HK and 441 HK, respectively. While some strain-hardening of the ausferrite is suggested by the microhardness measurements, this method cannot be used to clearly identify any possible transformation of austenite to martensite, especially in the near-surface layer less than 50 μm below the surface. This drawback is reflected in the surface microhardness measurements for Grade 1 ADI shown in Fig. 2(b). Since the first microhardness measurements were taken at 50 μm below the surface, no sign of strain-hardening was detected there. Any microstructural transformation and deformation that occurred within 50 μm of the surface was not captured by the microhardness measurements. It must also be noted that microhardness measurements on ADI are challenging due to the presence of subsurface graphite. Fig. 3 shows the microhardness measurements of Grade 1 ADI drilled at different machining parameters. Significantly higher microhardness measurements were observed when the tests were done at 10 μm below the machined surface. However, these measurements might be inaccurate due to insufficient distance from the sample edge. In addition, while higher microhardness measurements indicate some strain-hardening of the ausferrite, they do not necessarily mean that martensitic transformation also occurred.
An increase in hardness was found on Grade 1 ADI up to 150 μm (0.006 in.) from the drilled surface. The higher hardness measurements were found at low feed rates (0.29 mm/rev). Therefore, low feed rates during drilling must be avoided to minimize drill dwelling and work hardening. High surface hardness also occurred at the highest drilling speeds and feed rates with worn drills. This outcome can be expected as a result of increased plastic strain and higher cutting temperatures under these drilling conditions. Improved surface-layer metallographic analysis is possible using EBSD. EBSD imaging techniques allow the ferrite and martensite phases to be readily differentiated from the austenite phase. In addition, EBSD can also be used to better identify the MAZ microstructure and texture (both the deformed and the transformed zones). Fig. 4 shows the EBSD image (ferrite phase highlighted) of the sub-surface of a milled Grade 1 ADI sample. EBSD readily shows the thickness of the MAZ, indicated by the deformed region (color shift due to an orientation shift) and the transformed layer (more ferrite due to transformation of austenite to martensite). Martensite is visible in the ferrite window because of its BCT (body-centered tetragonal) crystalline structure. This image not only shows the change in grain orientation of the ADI but also clearly shows that phase transformation of deformed austenite to martensite occurred in the sub-surface of the milled ADI. A total MAZ thickness of 12 μm, with a 4 μm deep transformed region, is clearly observed in this milling sample.
Conclusions
Comprehensive machining recommendations for the commercial grades of ADI have been generated. This information is critical to drive further applications of ADI as an engineering material. Strain-induced martensitic transformation of the ausferrite is observed on previously machined surfaces. A sufficient depth of cut and feed rate must be used when machining ADI to keep the cut not only below the surface martensitic layer but also below the work-hardened layer, so that tool life is not reduced during subsequent passes.
|
2019-04-30T13:07:48.102Z
|
2018-06-01T00:00:00.000
|
{
"year": 2018,
"sha1": "9fc9a24d1c68eb10d187ca903a1704f60ae476bf",
"oa_license": "CCBY",
"oa_url": "https://www.scientific.net/MSF.925.311.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "973bed9ea1d3d6b38c181b5369e479ee4ce44cd6",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
16221268
|
pes2o/s2orc
|
v3-fos-license
|
Evaluation of Constant Thickness Cartilage Models vs. Patient Specific Cartilage Models for an Optimized Computer-Assisted Planning of Periacetabular Osteotomy
Modern computerized planning tools for periacetabular osteotomy (PAO) use either morphology-based or biomechanics-based methods. The latter rely on estimation of peak contact pressures and contact areas using either patient specific or constant thickness cartilage models. We performed a finite element analysis investigating the optimal reorientation of the acetabulum in PAO surgery based on simulated joint contact pressures and contact areas using a patient specific cartilage model. Furthermore, we investigated the influence of using a patient specific cartilage model or a constant thickness cartilage model on the biomechanical simulation results. Ten specimens with hip dysplasia were used in this study. Image data were available from CT arthrography studies. Bone models were reconstructed. Mesh models for the patient specific cartilage were defined and subsequently loaded under previously reported boundary and loading conditions. Peak contact pressures and contact areas were estimated in the original position. Afterwards we used a validated preoperative planning software to change the acetabular inclination by an increment of 5° and measured the lateral center edge angle (LCE) at each reorientation position. The position with the largest contact area and the lowest peak contact pressure was defined as the optimal position. In order to investigate the influence of using a patient specific cartilage model or a constant thickness cartilage model on the biomechanical simulation results, the same procedure was repeated with the same bone models but with a cartilage mesh of constant thickness. Comparison of the peak contact pressures and the contact areas between these two different cartilage models showed good correlation for peak contact pressures (r = 0.634 ∈ [0.6, 0.8], p < 0.001) and contact areas (r = 0.872 > 0.8, p < 0.001). For both cartilage models, the largest contact areas and the lowest peak pressures were found at the same position. Our study is the first study comparing peak contact pressures and contact areas between patient specific and constant thickness cartilage models during PAO planning. Good correlation for these two models was detected. Computer assisted planning with FE modeling using constant thickness cartilage models might be a promising PAO planning tool when a conventional CT is available.
Introduction
Periacetabular osteotomy (PAO) is an established surgical intervention for the treatment of hip dysplasia and acetabular retroversion [1,2]. During the procedure, the acetabulum is reoriented in order to optimize the containment of the femoral head and the pressure distribution between the acetabulum and the femoral head, thereby reducing the peak contact pressures within the joint. The goal of acetabular reorientation is to restore or to approximate normal acetabular geometry. In order to achieve this, two types of planning strategies have been reported, which can be divided into morphology-based planning methods and biomechanics-based planning methods. Morphology-based planning uses standard geometric parameters, which have shown their importance for quantification of acetabular under- or overcoverage [3]. Several authors have described different morphology-based planning methods, which range from simplified two-dimensional planning [4][5][6] to complex three-dimensional planning applications [7][8][9][10][11]. Other authors presented biomechanics-based planning methods. Different approaches have been presented using, for example, Discrete Element Analysis (DEA) [12] or the more sophisticated Finite Element Analysis (FEA) [13,14]. In the literature, both constant thickness cartilage models [14] and patient specific cartilage models [15] have been suggested. In the clinical routine, knowledge of patient specific cartilage is rarely available, since a special imaging protocol (e.g. CT arthrography or MRI with dGEMRIC, T1rho or T2 mapping) is necessary to retrieve this information. One alternative could be a constant thickness cartilage model that is virtually generated from bony surface models derived from conventional CT scans. However, differences between these two cartilage models in the planning of PAO using FE simulation have never been investigated. Previously, we have developed a morphology-based 3D planning system for PAO [16]. This system allows for quantification of the hip joint morphology in three dimensions, using geometric parameters such as the inclination and anteversion angles, the lateral center edge (LCE) angle and femoral head coverage. It also allows for virtual reorientation of the acetabulum according to these parameters. In the current study, we enhanced this application with an additional biomechanics-based method for estimation of joint contact pressures employing FEA. In this study, we investigated the following research questions: 1. What is the optimal position of the acetabulum based on simulated joint contact pressures using patient specific cartilage models in an FE analysis?
2. Are there significant differences in joint contact pressures between patient specific cartilage model and constant thickness cartilage model in the same hip model?
System Overview
The computer-assisted planning system for PAO uses 3D surface models of the pelvis and the femur, generated from DICOM (digital imaging and communication in medicine) data using a commercially available segmentation program (AMIRA, Visualization Sciences Group, Burlington, MA). The system starts with a morphology-based method. Employing fully automated detection of the acetabular rim, parameters such as acetabular version, inclination, the LCE angle, the femoral head extrusion index (EI) and femoral head coverage can be calculated for a computer-assisted diagnosis [16]. Afterwards, the system offers the possibility to perform a virtual osteotomy (Fig 1.A(1)) and reorientation of the acetabular fragment in a stepwise pattern. During the fragment reorientation, acetabular morphological parameters are re-computed in real-time (Fig 1.A(2)) until the desired position is achieved.
Our system is further equipped with a biomechanics-based FE prediction of the changes in cartilage contact stresses that occur during acetabular reorientation. An optimal position of the acetabulum can be defined once contact areas in the articulation are maximized while, at the same time, peak contact pressures are minimized (Fig 1.(B)).
The respective cartilage model for the biomechanics-based FE prediction is generated from either CT arthrography data (patient specific) or using a virtually generated cartilage with predefined thickness (constant thickness).
Biomechanical Model of Hip Joint
Cartilage models. In the literature, both constant thickness cartilage models and patient specific cartilage models have been employed. Zou et al. [14] used a constant thickness model and thus created a cartilage with a predefined thickness of 1.8 mm, a value derived from cartilage thickness data in the literature. In contrast, Harris et al. [15] introduced a CT arthrography protocol allowing for excellent visualization of patient specific cartilage. DICOM data of dysplastic hip joints, which had been CT scanned using this arthrography protocol, were provided by the open source dysplastic hip image data from the Musculoskeletal Research Laboratories, University of Utah [17]. The data provider has obtained IRB approval (University of Utah IRB #10983). We used our morphology-based planning system for calculation of the acetabular morphological parameters [18], verifying true dysplasia (Table 1). We used these datasets in order to retrieve the patient specific cartilage models. The bony anatomy of the same ten specimens was then used to create the constant thickness cartilage models by expanding a constant 1.8 mm thickness using a 3D dilation operation on the articular surface.
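As an illustrative aside, a uniform-thickness cartilage shell of this kind can be approximated from a binary bone mask with a morphological dilation; the sketch below is a simplified, hypothetical version of such a step (the study itself used AMIRA/ScanIP surface models and restricted the layer to the articular surface rather than the whole bone).

    import numpy as np
    from scipy import ndimage

    def constant_thickness_shell(bone_mask, voxel_size_mm, thickness_mm=1.8):
        """Grow an approximately uniform cartilage shell from a bone mask.

        bone_mask     : 3D boolean array, True inside the bone
        voxel_size_mm : isotropic voxel edge length in mm
        thickness_mm  : desired shell thickness (1.8 mm, as in Zou et al. [14])
        """
        n_iter = max(1, int(round(thickness_mm / voxel_size_mm)))
        # Dilate the bone outwards by roughly the requested thickness ...
        grown = ndimage.binary_dilation(bone_mask, iterations=n_iter)
        # ... and keep only the newly added voxels as the cartilage layer.
        return grown & ~bone_mask

    # e.g., cartilage = constant_thickness_shell(bone_mask, voxel_size_mm=0.6)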
Mesh Generation. Bone and cartilage surface models of the reoriented hip joints were imported into the ScanIP software (Simpleware Ltd, Exeter, UK) as shown in Fig 2(A) and 2(C). Surfaces were discretized using tetrahedral elements (Fig 2(B) and 2(D)). Since the primary focus was the joint contact stresses, a finer mesh was employed for the cartilage than for the bone. Refined tetrahedral meshes were constructed for the cartilage models (~135369 elements for the femoral cartilage model and ~92791 elements for the acetabular cartilage model) using the ScanFE module (Simpleware Ltd, Exeter, UK). Cortical bone surfaces were discretized using coarse tetrahedral elements (~149120 elements for the femoral model and ~188526 elements for the pelvic model). Trabecular bone was not included in the models, as it only has a minor effect on the predictions of contact pressure, as reported in another study [19]. Material property. Acetabular and femoral cartilage were modeled as a homogeneous, isotropic, and linearly elastic material with Young's modulus E = 15 MPa and Poisson's ratio ν = 0.45 [14]. Cortical bone of the pelvis and femur was modeled as a homogeneous, isotropic material with elastic modulus E = 17 GPa and Poisson's ratio ν = 0.3 [14].
Boundary Conditions and Loading. Tied and sliding contact constraints were used in Abaqus/CAE 6.10 (Dassault Systèmes Simulia Corp, Providence, RI, USA) to define the cartilage-to-bone and cartilage-to-cartilage interfaces, respectively. It has been reported that the friction coefficient between articular cartilage surfaces is very low (0.01-0.02) in the presence of synovial fluid, making it reasonable to neglect any frictional shear stresses [15,20]. The top surface of the pelvis and the pubic areas were fixed, and the distal end of the femur was constrained to prevent displacement in the body x and y directions while being free in the vertical z direction (Fig 2(E)). The center of the femoral head was derived from a least-squares sphere fit and was selected to be the reference node. The nodes of the femoral head surface were constrained by the reference node via kinematic coupling. The model with fixed boundary conditions was then subjected to a loading condition as published before [21], representing a single-leg stance situation with the resultant hip joint contact force acting at the reference node. Following the loading specifications suggested in another previous study [22] (Fig 2(E)), the components of the joint contact force along the three axes were given as 195 N, 92 N, and 1490 N, respectively. In order to remove any scaling effect of body weight on the absolute value of the contact pressure, we defined a constant body weight of 650 N for all subjects. The resultant force was applied based on the anatomical coordinate system described by Bergmann et al. [21], whose local coordinate system was defined with the x axis running between the centers of the femoral heads (positive running from the left femoral head to the right femoral head), the y axis pointing directly anteriorly, and the z axis pointing directly superiorly.
Study 1: FE simulation for biomechanics-based planning of PAO using the patient specific cartilage model. In order to find the optimal acetabular position, the acetabular fragment was virtually rotated around the y axis (Fig 2(E)) in 5° increments in relation to the anterior pelvic plane (APP). This was intended to imitate a decrease in acetabular inclination, as performed during actual PAO surgery (Fig 2(C)). For each increment, the predicted peak contact pressure and total contact area were directly extracted from the output of Abaqus/CAE 6.10. The resulting peak contact pressures and contact areas in the different acetabular positions were then compared, and the corresponding LCE angles were measured. The optimal orientation was determined as the position yielding the maximum contact area and the minimum peak contact pressure.
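A bare-bones version of the reorientation sweep itself can be written as a rotation of the fragment vertices about the y axis; the snippet below is only a schematic stand-in for the validated planning software and the Abaqus runs, and the downstream FE evaluation call is a placeholder.

    import numpy as np

    def rotate_about_y(vertices, center, angle_deg):
        """Rotate acetabular fragment vertices about the y axis through `center`.

        vertices  : (N, 3) array of fragment vertex coordinates
        center    : (3,) rotation centre (e.g., the hip joint centre)
        angle_deg : reorientation increment in degrees (5 degree steps here)
        """
        a = np.deg2rad(angle_deg)
        R = np.array([[ np.cos(a), 0.0, np.sin(a)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(a), 0.0, np.cos(a)]])
        return (vertices - center) @ R.T + center

    # Sweep candidate reorientations in 5 degree increments (schematic loop):
    # for angle in range(0, 35, 5):
    #     reoriented = rotate_about_y(fragment_vertices, joint_center, angle)
    #     peak_pressure, area = run_fe_case(reoriented)  # placeholder for the Abaqus job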
Study 2: Evaluation of the influence of using different cartilage models on the simulation results. After the peak pressures and contact areas had been simulated using the patient specific cartilage models, the same procedure was performed using the constant thickness cartilage models. Finally, a comparison of the peak pressures and contact areas between the patient specific and constant thickness cartilage models was performed. Linear regression analysis was used to determine associations between the results for peak pressures and contact areas for both cartilage types. Thus, the values for the constant thickness models were the independent variables, whereas the values obtained by the patient specific models represented the dependent variables. Pearson's correlation coefficient r was interpreted as "poor" below 0.3, "fair" from 0.3 to 0.5, "moderate" from 0.5 to 0.6, "moderately strong" from 0.6 to 0.8, and "very strong" from 0.8 to 1.0. The significance level was defined as p < 0.05.
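The statistical comparison described above can be reproduced with a few lines of standard code; the sketch below uses scipy.stats.linregress, and the listed pressure values are placeholders rather than the study's measurements.

    from scipy import stats

    # Peak contact pressures (MPa) per specimen -- placeholder values only
    constant_thickness = [8.1, 9.4, 7.7, 10.2, 8.9, 9.8, 7.3, 8.6, 9.1, 10.5]
    patient_specific   = [7.6, 9.9, 7.1, 11.0, 8.2, 10.3, 6.9, 8.8, 9.5, 11.2]

    # Constant-thickness values as the independent variable (x) and
    # patient-specific values as the dependent variable (y), as in the study.
    res = stats.linregress(constant_thickness, patient_specific)
    print(f"r = {res.rvalue:.3f}, p = {res.pvalue:.4f}, "
          f"slope = {res.slope:.2f}, intercept = {res.intercept:.2f}")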
Results
While the initial contact area in the dysplastic hip was primarily located in an eccentric superolateral region of the acetabulum, an increase in LCE angle led to an enlarged and more homogeneously distributed contact area (Fig 3). At the same time, an increase in LCE angle resulted in decreased peak contact pressures. For each specimen, the optimal acetabular fragment reorientation was defined as the position with the minimum peak contact pressure and the maximum contact area (Table 2).
Comparison of the peak contact pressures and the contact areas between the two different cartilage models showed similar results (Table 3). Regression analysis quantitatively showed that the results obtained by the constant thickness cartilage models had good correlation with those obtained using the patient specific cartilage models. Specifically, a moderately strong correlation was found between both cartilage models when analyzing peak contact pressures (r = 0.634 ∈ [0.6, 0.8], p < 0.001) (Fig 4), while a very strong correlation was found when analyzing the contact areas between the two different cartilage models (r = 0.872 > 0.8, p < 0.001) (Fig 4(B)). For both cartilage models, the largest contact areas and the lowest peak pressures were found at the same position (Table 3).
Discussion
We used a previously validated morphology-based PAO planning system [16] to perform virtual acetabular reorientation. An additional biomechanics-based module then estimated contact areas and peak contact pressures within the joint. First, we used hip joint models with patient specific cartilage models and changed the LCE angle in order to increase femoral head containment and to find the optimal position with the largest contact area and lowest peak contact pressure. The same operation was then conducted with the bone models of the same hip joints by replacing the patient specific cartilage models with virtually generated constant thickness cartilage models. In the patient specific cartilage models, an increase in LCE angle led to enlarged and more homogeneously distributed contact areas and decreased peak contact pressures. Comparison of the peak contact pressures and the contact areas between the two different cartilage models showed similar results. Regression analysis quantitatively showed a moderately strong correlation between both models for peak contact pressures and a very strong correlation for contact areas.
In the light of our findings, several aspects need to be discussed. We did not include the acetabular labrum in our FE analysis; however, the role of the labrum in load distribution is debated in the literature. While some authors promoted inclusion of the labrum [23], other authors denied the importance of its inclusion [24]. More interestingly, Henak et al. [17] showed that the labrum has a far more significant role in the biomechanics of dysplastic hip joints than it does in normal hips, since it supports a large percentage of the load transferred across the joint due to the eccentric loading in dysplastic hips. The same study group, in a previous study [25], however, found that the labrum supported less than 3% of the total load across the joint in normal hips. The final goal of our study was not to measure peak contact pressures and contact areas in the originally dysplastic state of our specimens, but to find an optimal position resembling a "normal" hip joint during PAO. Hence, for this purpose, disregarding the labrum was acceptable. Regarding loading conditions, a fixed body weight of 650 N [21] was used, which is not patient specific. However, Zou et al. [14] justified the use of constant loading, since the relative change of contact pressure before and after PAO reorientation planning is assessed, regardless of the true patient weight. Also, the applied loading conditions were derived from in vivo data from patients who underwent total hip arthroplasty (THA) [21] and thus might be just an approximation to the true loading conditions in the native joint. For reasons of simplification, we also did not simulate typical motion patterns such as sitting-to-standing or the gait cycle. Since we only performed static loading, the conchoid shape of the hip joint, which is important when performing dynamic loading, was also disregarded. This might be a limitation when interpreting our results. Finally, although the CT scans were performed in the supine position and the loading condition is based on a one-leg stance situation, this is not an infrequent practice [26], and previous work [27] has shown that there was no significant difference between the contact pressures in the one-leg stance reference frame and those in the supine reference frame.
Our results are well reflected in the current literature. Zhao et al. [13] conducted a 3D FE analysis investigating the changes in Von Mises stress distribution in the cortical bone before and after PAO surgery. They showed the favorable stress distribution in normal hips compared to dysplastic hips. One limitation of this study might be that the specimens were not truly dysplastic hips. The authors created dysplasia by deforming the acetabular rim of normal hip joints. Hence, their depiction of the stress distribution in the dysplastic joint is rather an approximation. Furthermore, they used a constant thickness cartilage model. They did not estimate the pressure distribution in the cartilage model but in the underlying subchondral cortical bone. Another group developed a biomechanical guiding system (BGS) [12,26,28]. In 2009 they presented a manuscript reporting on the three-dimensional mechanical evaluation of joint contact pressure in 12 PAO patients with a 10 year follow-up. They measured radiologic angles and joint contact pressures in these patients pre- and postoperatively. The authors were able to show that after the 10 year follow-up, peak contact pressures were reduced 1.7-fold and that lateral coverage increased in all patients. One limitation of their study is the use of discrete element analysis (DEA). Since the system was not only used for preoperative planning, but also as an intraoperative guidance system, the DEA represents a computationally efficient method for modeling cartilage stress by neglecting underlying bone stress. The cartilage models, however, remain largely approximated, since neither patient specific nor constant thickness cartilage models are used, but a simplified distribution of spring elements is employed for cartilage simulation. Recently, Zou et al. [14] also developed a 3D FE simulation of the effects of PAO on contact stresses. They validated their method on 5 models generated from CT scans of dysplastic hips and used constant thickness cartilage models. The acetabulum of each model was rotated in 5° increments in the coronal plane from the original position, and the relationship between contact area and pressure, as well as Von Mises stress in the cartilage, was investigated, looking for the optimal position of the acetabulum. One limitation of this study is that acetabular reorientation was roughly performed with a commercial FE analysis software (Abaqus, Dassault Systèmes Simulia Corp, USA). Unlike our morphology-based planning application, their method is thus unvalidated and does not have a precise planning tool for an accurate quantification of patient specific 3D hip joint morphology.
In conclusion, our investigation contributes well to the current state of the art. First, to the best knowledge of the authors, this is the first study to use a patient specific cartilage model for biomechanics-based planning of PAO, allowing for estimation of changes of contact areas and peak pressures in truly dysplastic hips. Previous studies had either investigated normal or dysplastic hips, but never the true change during virtual reorientation of the latter. Furthermore, our results seem conclusive, since the optimal position with the largest contact areas and lowest peak pressures was found within the predefined normal values [3,29] for the investigated LCE angle. This range for safe positioning is especially important, since in real surgery reorientation towards the one "perfect" position might not be feasible. Finally, the comparison to constant thickness cartilage models is another novelty. Strong correlation was found for the biomechanical optimization results between these two cartilage models. This is encouraging, since acquisition of patient specific cartilage requires special multiplanar arthrography imaging (e.g. CT arthrography or MRI with dGEMRIC, T1rho or T2 mapping), while constant thickness cartilage is basically always available. Although our study has its limitations and further investigation is needed, computer assisted planning with FE modeling using constant thickness cartilage might be a promising PAO planning tool providing conclusive and plausible results.
|
2016-05-12T22:15:10.714Z
|
2016-01-05T00:00:00.000
|
{
"year": 2016,
"sha1": "d2407bfc6a6548f64ec2f91ec3fa59e4a821b397",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0146452&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d2407bfc6a6548f64ec2f91ec3fa59e4a821b397",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
232168551
|
pes2o/s2orc
|
v3-fos-license
|
Learning Class-Agnostic Pseudo Mask Generation for Box-Supervised Semantic Segmentation
Recently, several weakly supervised learning methods have been devoted to utilizing bounding box supervision for training deep semantic segmentation models. Most existing methods usually leverage generic proposal generators (e.g., dense CRF and MCG) to produce enhanced segmentation masks for further training segmentation models. These proposal generators, however, are generic and not specifically designed for box-supervised semantic segmentation, thereby leaving some leeway for improving segmentation performance. In this paper, we aim to find a more accurate learning-based class-agnostic pseudo mask generator tailored to box-supervised semantic segmentation. To this end, we resort to a pixel-level annotated auxiliary dataset where the class labels are non-overlapping with those of the box-annotated dataset. For learning the pseudo mask generator from the auxiliary dataset, we present a bi-level optimization formulation. In particular, the lower subproblem is used to learn box-supervised semantic segmentation, while the upper subproblem is used to learn an optimal class-agnostic pseudo mask generator. The learned pseudo segmentation mask generator can then be deployed to the box-annotated dataset for improving weakly supervised semantic segmentation. Experiments on the PASCAL VOC 2012 dataset show that the learned pseudo mask generator is effective in boosting segmentation performance, and our method can further close the performance gap between box-supervised and fully-supervised models. Our code will be made publicly available at https://github.com/Vious/LPG_BBox_Segmentation .
I. INTRODUCTION
Image semantic segmentation aims to label each pixel with a specific semantic category. Benefiting from advances in deep learning, convolutional neural network (CNN)-based methods [1]-[6] have achieved unprecedented success in image segmentation. Fully Convolutional Networks (FCN) [7] and U-Net [8] are two representative architectures for semantic segmentation. Hierarchical convolutional features [9], [10], feature pyramids [6], [11]-[13], multi-path structures [14]-[16], and dilated convolution [5] have also been shown to be beneficial for enhancing feature representation ability. Nonetheless, state-of-the-art deep semantic segmentation models are generally trained in a fully supervised manner, and heavily depend on laborious and costly pixel-level annotations of enormous numbers of images [17], [18].
Most existing methods usually leverage generic proposal generators [34]-[36] to produce enhanced segmentation proposals for further training segmentation models. For example, dense CRF [34] is adopted in WSSL [29] and BCM [33], while GrabCut [35] and MCG [36] are combined in SDI [31] for generating more precise segmentation proposals. Obviously, these generic proposal generators [34]-[36] are not specifically designed for box-supervised semantic segmentation, and improved segmentation performance can be expected by using a more precise proposal generator. Moreover, the gap between box-supervised and fully supervised learning could be completely eliminated if the proposal generator could produce the ground-truth segmentation masks. Thus, it is appealing to find a better segmentation mask generator tailored to bounding box supervision for improving segmentation performance.
In this paper, we present a learning-based pseudo mask generator (LPG) specified for box-supervised semantic segmentation to produce better segmentation proposals. We note that the parameters of several generic proposal generators, e.g., MCG [36], are actually obtained using an independent training set, e.g., BSDS500 [37]. Instead, we resort to a pixel-level annotated auxiliary dataset where the class labels are non-overlapping with those of the box-annotated dataset. Considering that PASCAL VOC 2012 [17] is usually adopted in weakly supervised semantic segmentation, we use the COCO benchmark [18] to form the auxiliary dataset by selecting the 60 object classes that do not intersect with the 20 classes in VOC 2012. The auxiliary dataset is leveraged to learn a class-agnostic proposal generator, which can then be deployed to any box-annotated dataset for generating accurate segmentation proposals and improving segmentation performance. Our LPG is of great practical value because: (i) public pixel-level annotated datasets, e.g., PASCAL VOC [17], MS-COCO [18], LVIS [38], Open Images [39], etc., are ready to act as the auxiliary datasets to train LPG for future box-supervised semantic segmentation tasks; and (ii) in practical applications, trained LPG models can be directly applied to box-annotated datasets even with new classes and complicated scenes, without requiring any pixel-level annotations.
Fig. 1. Illustration of learning stage-wise LPG models on the auxiliary dataset Daux by using the EM algorithm to solve the bi-level optimization problem in Eqn. (6). In the E-step, pixel-level annotations are adopted as ground truth to train the LPG. In the M-step, a deep segmentation network is trained by using the pixel-level pseudo masks generated by the LPG for supervision. The alternating executions of the E-step and M-step result in stage-wise LPGs, which can then be deployed to any box-annotated dataset D for generating accurate pixel-level segmentation masks and improving segmentation performance.
We further present a bi-level optimization model to make the learned proposal generator specific to box-supervised semantic segmentation. Based on the bounding box annotations of the auxiliary dataset, we define the lower subproblem as a box-supervised semantic segmentation model. With the pixel-level annotations, the upper subproblem is defined as the learning of a class-agnostic pseudo mask generator for optimizing box-supervised segmentation performance. An expectation-maximization (EM) algorithm is then used for joint learning of the box-supervised segmentation model and the pseudo mask generators over multiple stages, as shown in Fig. 1. Moreover, a stage-wise generator is adopted to cope with the fact that the learned segmentation model gradually becomes more precise during training. By solving the bi-level optimization model, we tailor the learned pseudo mask generator to optimizing box-supervised semantic segmentation performance. The class-agnostic setting is also beneficial for making the learned pseudo mask generator generalize well to new classes and new datasets. Subsequently, the trained LPG can be deployed to boost the performance of any segmentation backbone network. Finally, experiments are conducted on the PASCAL VOC 2012 dataset [17] to evaluate our method. The results show that the learned pseudo mask generator is effective in boosting segmentation performance. For the segmentation backbones DeepLab-LargeFOV [40] and DeepLab-ResNet101 [5], our method outperforms state-of-the-art approaches [28], [29], [31], [33] and further closes the performance gap between box-supervised and fully supervised models.
Generally, the main contributions of this work are summarized as follows:
• A learning-based class-agnostic pseudo mask generator (LPG) is presented to produce more accurate segmentation masks tailored to box-supervised semantic segmentation.
• A bi-level optimization model and an EM-based algorithm are proposed to learn stage-wise, class-agnostic pseudo mask generators from the auxiliary dataset.
• Comprehensive experiments show that our proposed method performs favorably against the state of the art, and further closes the performance gap between box-supervised and fully supervised models.
The remainder of this paper is organized as follows: we briefly review relevant work on fully supervised and weakly supervised semantic segmentation in Section II. In Section III, the proposed LPG is presented in detail along with the bi-level optimization algorithm. In Section IV, experiments are conducted to verify the effectiveness of our LPG in comparison with state-of-the-art methods. Finally, Section V ends this paper with concluding remarks.
II. RELATED WORK
In this section, we briefly survey fully supervised learning, weakly supervised learning and transfer learning for semantic segmentation.
A. Fully Supervised Semantic Segmentation
Recent success in deep learning has brought unprecedented progress in fully supervised semantic segmentation [2], [3], [6], [7], [9], [12], [13], [40]. As two representative architectures, FCN [7] and U-Net [8] are widely used in semantic segmentation. Advanced network modules, e.g., dilated convolution [5], have also been introduced to enhance the representation ability of deep networks. In addition, hierarchical convolutional networks [9], [10] and pyramid architectures [6], [11]-[13] have drawn much attention for enriching feature representations in semantic segmentation. In PSPNet [6], a pyramid-pooling module (PPM) with different pooling sizes is adopted. In UperNet [13], Xiao et al. introduced multiple PPMs into feature pyramid networks. In DeepLabV3+ [41], depth-wise atrous convolution is adopted in an encoder-decoder to effectively exploit multi-scale contextual features. HRNet [42] forms parallel convolution streams to combine and exchange feature information across different resolutions. To better aggregate global and local features, DANet [43] adds a position attention module to learn spatial relationships among features, and CCNet [44] replaces the non-local block with a more computationally efficient criss-cross attention module. However, these state-of-the-art fully supervised approaches depend heavily on pixel-level annotations of enormous numbers of training images, severely limiting their scalability. In this work, we instead focus on weakly supervised semantic segmentation.
B. Weakly Supervised Semantic Segmentation
One key issue in weakly supervised approaches is how to generate proper segmentation masks from weak labels, e.g., image-level class labels [19]-[22], [45], [46], object points [23]-[25], scribbles [26], [27] and bounding boxes [28]-[33]. Among them, bounding box annotation has attracted considerable recent attention, and generic proposal generators, e.g., CRF, GrabCut and MCG, are usually deployed. In WSSL [29], dense CRF is adopted to enhance the estimated segmentation proposals, which act as latent pseudo labels in an EM algorithm for optimizing the deep segmentation model. In BoxSup [28], the MCG algorithm [36] is applied to produce segmentation proposals. In SDI [31], the segmentation proposals of GrabCut [35] and MCG [36] are intersected to generate more accurate pseudo labels. More recently, BCM [33] exploited box-driven class-wise masking and a filling-rate-guided adaptive loss to mitigate the adverse effects of wrongly labeled proposals, but it still depends heavily on a generic proposal generator, i.e., dense CRF. Box2Seg [47] introduces an additional encoder-decoder architecture with a multi-branch attention module, which further improves segmentation performance. In this work, we propose a learning-based pseudo mask generator specified for generating better segmentation masks in box-supervised semantic segmentation.
C. Transfer Learning for Semantic Segmentation
In general, synthetic datasets with pixel-level annotations are easier to collect, e.g., the computer game dataset GTAV [48], [49] and the virtual city dataset SYNTHIA [49], for fully supervised training of semantic segmentation, but they exhibit a domain gap with real-world scenes. Thus, domain adaptation approaches have been studied to narrow the distribution gap between synthetic and real-world scenes, including fully convolutional adaptation networks [48], hierarchical weighting networks [49] and the class-wise maximum squares loss [50]. Moreover, Shen et al. [51] proposed bidirectional transfer learning to tackle a more challenging task where images from both the source and the target domain carry only image-level annotations. However, these domain adaptation methods require shared class labels across domains. One exception is TransferNet [52], where class-agnostic transferable knowledge is learned from a source domain with pixel-level annotations and applied to a target domain with image-level labels, using an encoder-decoder with an attention module. However, weakly supervised semantic segmentation with only image-level labels is usually inferior to that with bounding box annotations, and in [52] a generic proposal generator (dense CRF) is still adopted for boosting performance. In this work, we propose a learning-based class-agnostic pseudo mask generator specified for the box-supervised semantic segmentation task.
III. PROPOSED METHOD

In this section, we first revisit the EM-based weakly supervised semantic segmentation method WSSL [29]. Then, using the auxiliary dataset, we present our bi-level optimization formulation and the EM algorithm to learn an optimal class-agnostic pseudo mask generator. Finally, the learned pseudo mask generator (LPG) can be deployed to a box-annotated dataset for improving weakly supervised semantic segmentation performance.
In particular, we use an auxiliary dataset with both bounding box and pixel-level annotations, i.e., D_aux = {(x_aux, y^p_aux, y^b_aux)}, to train LPG. Here, x_aux denotes an image from D_aux, while y^p_aux and y^b_aux respectively denote the pixel-level and bounding box annotations of x_aux. Without loss of generality, we assume that the box annotation y^b_aux has the same form as the pixel-level annotation y^p_aux, which can easily be obtained by assigning 1 to the pixels within the bounding boxes and 0 to the others for each class. The learned pseudo mask generator can then be used to train a deep semantic segmentation model on a box-annotated dataset D = {(x, y^b)}. To verify the generalization ability of LPG, we further assume that the semantic classes of D do not intersect with those of D_aux.
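As an illustration of this box-to-mask convention, the following minimal PyTorch sketch converts a list of bounding boxes into the per-class binary map y^b; the function name and tensor layout are ours, not taken from the released code.

```python
import torch

def boxes_to_mask(boxes, class_ids, height, width, num_classes):
    """Convert bounding box annotations into a pixel-form box mask y^b.

    Pixels inside a box of a given class are set to 1 for that class,
    and all other pixels are 0 (illustrative helper only).
    """
    y_b = torch.zeros(num_classes, height, width)
    for (x1, y1, x2, y2), c in zip(boxes, class_ids):
        y_b[c, y1:y2, x1:x2] = 1.0
    return y_b

# Example: two boxes of classes 3 and 7 on a 256x256 image
mask = boxes_to_mask([(10, 20, 120, 200), (130, 40, 250, 220)], [3, 7], 256, 256, num_classes=60)
```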
A. Revisit EM Algorithm in WSSL
In WSSL [29], Papandreou et al. adopted the EM algorithm for weakly supervised learning on the dataset D. The pixel-level segmentation y^p of x is treated as a latent variable. Denote by θ′ the previous parameters of the segmentation model, so that f(x; θ′) is the segmentation predicted with the parameters θ′. In the EM algorithm, the E-step is used to update the latent segmentation by

    ŷ^p = arg min_{y^p} L(y^p, f(x; θ′)),    (1)

where L(y^p, f(x; θ′)) is usually the cross-entropy loss. Then, the M-step is deployed to update the model parameters θ,

    θ = arg min_θ L(ŷ^p, f(x; θ)).    (2)

For image-level weak supervision, WSSL [29] suggests both the EM-Fixed and EM-Adapt methods in the E-step. For box supervision, WSSL [29] further considers Bbox-Seg, which exploits dense CRF [34] for generating segmentation proposals. In [28], [31], MCG and GrabCut are used to iteratively generate proposals for training segmentation models, which can also be regarded as EM-like procedures.

B. Learning Pseudo Mask Generator via Bi-Level Optimization

However, generic proposal generators are not designed for directly optimizing weakly supervised semantic segmentation performance. Thus, instead of a generic proposal generator, we aim at learning an optimized pseudo mask generator from D_aux by solving a bi-level optimization model. To this end, we utilize a class-agnostic pseudo mask generation network to update the latent segmentation on D_aux,

    ŷ^p_aux = g(y^b_aux, x_aux, f(x_aux; θ′); ω),    (3)

where ω denotes the parameters of the pseudo mask generation network. By substituting the above equation into the M-step, we obtain the lower subproblem on θ,

    θ̂(ω) = arg min_θ L(g(y^b_aux, x_aux, f(x_aux; θ′); ω), f(x_aux; θ)).    (4)

Taking the above two equations into consideration, the learned parameters θ̂ are a function of ω, i.e., θ̂(ω), and we have the pixel-level annotations on D_aux. Thus, the upper subproblem on ω can be defined for optimizing the performance of the box-supervised semantic segmentation model f(x_aux; θ̂),

    ω̂ = arg min_ω L(y^p_aux, f(x_aux; θ̂(ω))).    (5)

When the pseudo mask generator produces the ground-truth segmentation masks, the M-step becomes a fully supervised setting, and the gap between box-supervised and fully supervised learning can thus be completely eliminated. To ease the training difficulty, we therefore modify the upper subproblem by requiring the pseudo mask generator to accurately predict the ground-truth segmentation, resulting in the following bi-level optimization formulation,

    ω̂ = arg min_ω L(y^p_aux, g(y^b_aux, x_aux, f(x_aux; θ̂); ω)),    (6)

where the cross-entropy loss is adopted as L in both the upper and lower optimization problems.
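To make the two levels concrete, the sketch below writes out the lower and upper losses as they could be implemented in PyTorch; for readability it treats a single class with a two-channel foreground/background output, and all names are illustrative rather than taken from the official implementation.

```python
import torch
import torch.nn.functional as F

def lower_loss(seg_net, lpg, image, box_mask, prev_pred):
    """Lower subproblem: the segmentation network is supervised by LPG pseudo masks.

    `prev_pred` stands for f(x; theta') computed with the previous parameters;
    gradients are not propagated through the pseudo masks.
    """
    with torch.no_grad():
        pseudo = lpg(box_mask, image, prev_pred).argmax(dim=1)  # [N, H, W] hard labels
    return F.cross_entropy(seg_net(image), pseudo)

def upper_loss(lpg, image, box_mask, seg_pred, gt_mask):
    """Upper subproblem (in the spirit of Eqn. (6)): LPG is trained so that its
    pseudo masks match the ground-truth pixel annotations of the auxiliary set."""
    return F.cross_entropy(lpg(box_mask, image, seg_pred), gt_mask)
```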
Network Architectures. The training on D_aux involves both the pseudo mask generation network and the deep segmentation network. For the deep segmentation network, we keep consistent with most existing weakly supervised semantic segmentation methods and adopt DeepLab-LargeFOV [40] or DeepLab-ResNet101 [5] as the backbone. For the pseudo mask generation network, we adopt the Hourglass structure [53]. It takes the input image, the bounding box map y^b_aux for a specific class and the predicted segmentation for that class as input, and generates the segmentation mask of the corresponding class. Concretely, the output of the proposal generation network consists of two channels, one estimating the object mask inside the bounding box and the other the remaining, unrelated background.
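A minimal stand-in for the generator interface is sketched below; it replaces the Hourglass backbone with a few convolutions purely to keep the example short, so only the input/output shapes reflect the description above.

```python
import torch
import torch.nn as nn

class PseudoMaskGenerator(nn.Module):
    """Minimal stand-in for the class-agnostic LPG interface.

    Input channels: 3 (RGB image) + 1 (box map of one class) + 1 (current
    predicted probability of that class); output: 2 channels (object inside
    the box vs. background). The real model uses an Hourglass backbone.
    """
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 2, 1),
        )

    def forward(self, box_map, image, seg_prob):
        x = torch.cat([image, box_map, seg_prob], dim=1)  # [N, 5, H, W]
        return self.body(x)                               # [N, 2, H, W]
```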
Learning Pseudo Mask Generation Network. We further present an EM-like algorithm, shown in Fig. 1, that alternates between updating the pseudo mask generation network and the deep segmentation network. To begin with, we adopt y^b_aux as the initialization ŷ^p_aux, which is then used to train the deep segmentation network f(x_aux; θ). In the E-step, the pseudo mask generation network takes the input image x_aux, the bounding box annotation y^b_aux and the updated prediction f(x_aux; θ) as input, and the pixel-level annotation is adopted as the ground truth to train the network. In the M-step, the deep segmentation network takes an image x_aux as input, and the segmentation proposals generated by g(y^b_aux, x_aux, f(x_aux; θ′); ω) are used as the ground truth for supervising network training. We also note that the input of the pseudo mask generation network changes during training: along with the updating of the deep segmentation network, the predicted segmentation f(x_aux; θ′) becomes more and more accurate. Consequently, the final pseudo mask generation network may only be applicable to the last stage of segmentation network training, and is inappropriate for generating precise pseudo masks during early training. As a remedy, we treat each round of the E-step and M-step as a stage and learn a stage-wise pseudo mask generator. We empirically find that three or four stages are sufficient.
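The stage-wise alternation can be summarized by the following outline, where train_segnet and train_lpg are assumed helper routines that run ordinary supervised training and return the trained models.

```python
def train_stagewise_lpg(images, box_masks, gt_masks, train_segnet, train_lpg, num_stages=3):
    """Outline of the stage-wise EM procedure on the auxiliary dataset.

    `train_segnet(images, targets)` and `train_lpg(images, box_masks, seg_preds, targets)`
    are assumed helpers; `images`, `box_masks` and `gt_masks` are parallel lists.
    """
    lpg_stages = []
    seg_net = train_segnet(images, box_masks)                 # initialization with box masks
    for _ in range(num_stages):
        seg_preds = [seg_net(x) for x in images]
        lpg = train_lpg(images, box_masks, seg_preds, gt_masks)   # E-step: fit LPG to GT masks
        lpg_stages.append(lpg)
        pseudo = [lpg(b, x, p).argmax(1)                          # M-step targets from LPG
                  for b, x, p in zip(box_masks, images, seg_preds)]
        seg_net = train_segnet(images, pseudo)
    return lpg_stages, seg_net
```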
Discussion. Benefiting from the bi-level optimization formulation and the class-agnostic setting, our learning-based pseudo mask generator can be optimized for box-supervised semantic segmentation and is able to generalize well to new classes and new datasets. Generally, a more accurate pseudo mask generator is believed to benefit segmentation performance. In Eqn. (6), the pseudo mask generation network takes the box annotation as input and is trained to generate pseudo masks approaching the ground truth. Thus, it is specified for box supervision and can be used to boost semantic segmentation performance. Moreover, the pseudo mask generation network is class-agnostic: for any given semantic class, the same network is used to generate the segmentation mask of that class. Thus, it exhibits good generalization ability and can generalize well to new classes.
One may notice that TransferNet [52] adopts a setting similar to ours, where class-agnostic transferable knowledge is learned from an auxiliary dataset with pixel-level annotations and applied to another dataset with image-level annotations. Our LPG differs from [52] in two respects: (i) LPG aims to generate pixel-level segmentation masks from bounding boxes, which provide object localization information that underpins the superior performance of weakly supervised semantic segmentation; (ii) LPG is learned entirely for generating accurate pixel-level pseudo masks, whereas in [52] the proposals are obtained with an attention model that still requires a generic proposal generator (dense CRF) for boosting segmentation performance.
C. Box-Supervised Semantic Segmentation
Owing to its optimized performance and generalization ability, the learned pseudo mask generator can be readily deployed to any box-annotated dataset D for weakly supervised semantic segmentation, as shown in Fig. 2. Analogous to the EM-like algorithm, we first adopt y^b for training the deep semantic segmentation network f(x; θ). Then, the pseudo mask generator learned in the first stage is used to generate more accurate segmentation masks, i.e., g(y^b, x, f(x; θ); ω), which are then utilized for further training of the segmentation network. Subsequently, the pseudo mask generators learned in the second and later stages are deployed in turn. Finally, the semantic segmentation network trained in the last stage can be applied to produce satisfactory segmentation results. To verify the generalization ability of the learned pseudo mask generator, we assume that the semantic classes of D do not intersect with those of D_aux. Undoubtedly, the learned pseudo mask generator should work even better when the semantic classes of D and D_aux overlap.
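A corresponding outline of the deployment phase is given below; it reuses the same assumed train_segnet helper as before and simply replays the stage-wise LPG models on the box-annotated dataset, without any pixel-level labels.

```python
def train_on_box_dataset(images, box_masks, lpg_stages, train_segnet):
    """Deploy the stage-wise LPG models on a box-annotated dataset D (sketch).

    No pixel-level labels are used here; `train_segnet` is the same assumed
    supervised-training helper as in the auxiliary-dataset outline.
    """
    seg_net = train_segnet(images, box_masks)                 # warm-up with box masks
    for lpg in lpg_stages:                                    # Stage 1, 2, ... in order
        seg_preds = [seg_net(x) for x in images]
        pseudo = [lpg(b, x, p).argmax(1)
                  for b, x, p in zip(box_masks, images, seg_preds)]
        seg_net = train_segnet(images, pseudo)
    return seg_net
```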
D. Generating Pseudo Masks with Multi-Classes
Finally, we discuss how to generate segmentation masks with the LPG model for an image that contains objects of multiple classes. Assume that there are N object categories in total in the target dataset D (i.e., N = 20 for PASCAL VOC 2012 [17]). At a given stage, the LPG model g with parameters ω takes the box annotations y^b, the input image x and the current segmentation result f(x; θ) as input, and its output is normalized by the softmax function. The multi-class masks y^p are generated by

    p_i = softmax(g(y^{b_i}, x, f(x; θ); ω)),    y^p = Σ_{i=1}^{N} C_i • ψ(p_i),

where y^{b_i} is the bounding box annotation of objects from the i-th class, C_i is the i-th class label, ψ(·) extracts a binary mask from a two-channel probability map (defined below), and • is the entry-wise product.
Since the output of LPG is a two-channel probability map, we adopt a function ψ(p) to extract the binary mask from p, where 1 indicates the foreground and 0 denotes the background. We simply implement ψ as the numpy.argmax [54] function in Python. For overlapping objects from two classes, we calculate the overlapping rate = overlapping area / predicted semantic area for each class, and assign the overlapping region to the class with the higher overlapping rate.
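The following NumPy sketch illustrates one possible way to assemble the per-class LPG outputs into a single multi-class pseudo mask and to resolve overlaps by the overlapping rate; thresholding the foreground probability at 0.5 is equivalent to numpy.argmax over the two softmax channels, and all names are illustrative.

```python
import numpy as np

def merge_class_masks(fg_probs, boxes_per_class):
    """Combine per-class two-channel LPG outputs into one multi-class pseudo mask.

    `fg_probs[i]` is the foreground probability map for class i (after softmax),
    `boxes_per_class[i]` is the binary box map for class i. Overlaps between two
    classes are assigned to the class with the higher overlapping rate
    (overlapping area / predicted semantic area).
    """
    num_classes, h, w = len(fg_probs), fg_probs[0].shape[0], fg_probs[0].shape[1]
    label = np.zeros((h, w), dtype=np.int64)          # 0 = background
    area = np.zeros(num_classes)
    masks = []
    for i in range(num_classes):
        # psi(p): binarize the two-channel output, restricted to the box region
        m = (fg_probs[i] >= 0.5) & (boxes_per_class[i] > 0)
        masks.append(m)
        area[i] = m.sum()
        label[m] = i + 1                              # later classes temporarily overwrite
    # resolve pairwise overlaps by the overlapping rate
    for i in range(num_classes):
        for j in range(i + 1, num_classes):
            overlap = masks[i] & masks[j]
            if overlap.any() and area[i] > 0 and area[j] > 0:
                winner = i if overlap.sum() / area[i] > overlap.sum() / area[j] else j
                label[overlap] = winner + 1
    return label
```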
IV. EXPERIMENTS
In this section, we first discuss the effectiveness of stage-wise LPG for generating pseudo masks and the architecture of the LPG network. Then, our method is compared with state-of-the-art box-supervised semantic segmentation methods in both the weakly supervised and the semi-supervised setting. For evaluating segmentation performance, mean pixel Intersection-over-Union (mIoU) is adopted as the quantitative metric. Our source code and all pre-trained models have been made publicly available at https://github.com/Vious/LPG_BBox_Segmentation.
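For reference, a minimal implementation of the mIoU metric used throughout the evaluation could look as follows; the ignore_index convention is an assumption, matching common PASCAL VOC practice.

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Mean Intersection-over-Union over classes.

    `pred` and `gt` are integer label maps of identical shape; pixels equal to
    `ignore_index` in `gt` are excluded. Minimal reference implementation.
    """
    valid = gt != ignore_index
    ious = []
    for c in range(num_classes):
        p, g = (pred == c) & valid, (gt == c) & valid
        union = (p | g).sum()
        if union > 0:
            ious.append((p & g).sum() / union)
    return float(np.mean(ious)) if ious else 0.0
```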
A. Experimental Setting
Dataset. In the weakly supervised setting, we follow [28], [29], [31], [33] and use the PASCAL VOC 2012 dataset [17] as D, which contains 1,464 training images and 1,449 validation images. Following [29], [31], [55], the training set is augmented to 10,582 images for training the segmentation networks, while the 1,449 validation images are used for evaluating their performance. To train our LPG, we choose the MS-COCO dataset [18] as the auxiliary dataset D_aux. MS-COCO contains 80 object classes, 20 of which appear in PASCAL VOC 2012. To verify the generalization ability of the class-agnostic LPG, we exclude these 20 classes from D_aux. In the semi-supervised setting, the only difference lies in training the segmentation networks on PASCAL VOC 2012, where the original 1,464 training images come with pixel-level annotations, while the other 9,118 augmented training images come with box-level annotations.
Training Details.
In our experiments, DeepLab-LargeFOV [40] and DeepLab-ResNet101 [5] are deployed as segmentation backbones to verify the effectiveness of LPG, keeping consistent with most existing weakly supervised semantic segmentation methods [29], [31], [33]. The parameters of DeepLab-LargeFOV [40] and DeepLab-ResNet101 [5] are initialized from VGG16 [56] and ResNet101 [57] networks trained on ImageNet [58] for classification. The training procedures of LPG and the segmentation networks are carried out using PyTorch [59] on two NVIDIA RTX 2080 Ti GPUs. For training LPG on the auxiliary dataset MS-COCO, the initial learning rate is set to 1.0 × 10^-4 for each training stage, and the Adam [60] algorithm with β = (0.9, 0.999) is adopted. For training the segmentation backbones on both MS-COCO and PASCAL VOC 2012, we adopt SGD as the optimizer and follow the parameter settings suggested by DeepLab [5]. In particular, different learning rates are set for the four parameter groups of a segmentation backbone: (i) last_bias, the bias of the last layer; (ii) last_params, the parameters of the last layer except the bias; (iii) head_bias, the biases of the non-last layers; (iv) head_params, the parameters of the non-last layers except the biases. The hyper-parameters for these four groups are listed in Table I, where Max_iter is the maximum number of training iterations. Moreover, we adopt the Poly [5] strategy to adjust the learning rate when training all models in each stage, which decays an initial learning rate lr_0 as lr = lr_0 × (1 − iter/Max_iter)^0.9.
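The Poly schedule can be written out directly, or attached to a PyTorch optimizer through LambdaLR; the maximum iteration count (20,000) and the dummy parameter below are placeholders for illustration.

```python
import torch

def poly_lr(initial_lr, iteration, max_iter, power=0.9):
    """Poly schedule used for all models: lr = lr_0 * (1 - iter / max_iter) ** 0.9."""
    return initial_lr * (1 - iteration / max_iter) ** power

# Equivalent schedule attached to a PyTorch optimizer (dummy parameter for illustration).
optimizer = torch.optim.SGD([torch.zeros(1, requires_grad=True)], lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda it: (1 - it / 20000) ** 0.9)
```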
B. Ablation Study
By adopting DeepLab-LargeFOV [40] as the semantic segmentation backbone, we discuss how to choose the number of stages and the architecture of LPG.

Stage-wise Pseudo Mask Generators. Adopting three networks, i.e., ResNet18 [57], ResNet101 [57] and HourglassNet (HgNet) [53], as the architecture of LPG, we discuss the number of stages. As reported in Table II, the segmentation performance of all three models improves as the number of stages increases. The gains from Stage 1 to Stage 3 are substantial, while adding Stage 4 is only marginally better, or even slightly worse, than using three stages. From Fig. 3, the generated masks become more accurate with increasing LPG stages, and there is little difference between Stage 3 and Stage 4. Thus, we set the number of stages to 3 to balance segmentation performance and training time.
Fixed LPG. To further illustrate the importance of stage-wise training, we compared stage-wise LPG with the fixed final LPG for training DeepLab-LargeFOV [40] on VOC 2012. From Table II, one can see that the final LPG (HgNet-Fixed) is not the optimal choice for early stages, and the segmentation results with stage-wise LPG are consistently better than those with the fixed LPG at every stage.
LPG Architecture. As for the architecture of LPG, it is unsurprising that ResNet101 [57], with more parameters, is a better choice than ResNet18 [57]. Interestingly, HourglassNet [53], with fewer parameters, is superior to ResNet101. The reason may be that the multi-scale architecture of HourglassNet better extracts pixel-level segmentation masks. Therefore, we choose HourglassNet [53] as the default architecture of LPG.
Evaluation of Pseudo Masks. The rationale of the EM optimization is that LPG (E-step) gradually generates more precise pseudo labels to supervise the training of the segmentation backbone (M-step). To support this, we report the mIoU values of the stage-wise pseudo masks generated by LPG in Table III. On PASCAL VOC 2012, the pseudo masks generated by our LPG are much more precise than those of dense CRF, thereby improving segmentation results.
C. Comparison with State-of-the-arts
By adopting DeepLab-LargeFOV [40] and DeepLab-ResNet101 [5], respectively, as the semantic segmentation backbone, our method is compared with state-of-the-art methods in both the weakly supervised and the semi-supervised setting. The semantic segmentation backbone is trained with 3 stages of the EM algorithm, and the results denoted Ours_1, Ours_2 and Ours are produced by the segmentation networks trained in Stage 1, 2 and 3, respectively.
1) DeepLab-LargeFOV as Segmentation Backbone: In the weakly supervised setting, our method is compared with TransferNet [52], WSSL [29], BoxSup [28], SDI [31] and BCM [33], where TransferNet [52] is trained with image-level class labels. The quantitative results are reported in Table IV. One can see that the semantic segmentation network from the first stage, i.e., Ours_1, is already comparable with all the competing methods. The reason can be attributed to LPG generating more accurate segmentation masks for training the segmentation network, whereas the competing methods adopt generic proposal generators, e.g., dense CRF, MCG or GrabCut. Furthermore, training the segmentation backbones over multiple stages leads to significant performance gains, because LPG gradually produces much more accurate pixel-level pseudo segmentation masks as supervision. One may also find from Table IV that the performance gap between box-supervised and fully supervised segmentation is further narrowed. Since none of these weakly supervised methods released source code, we show the segmentation results of Ours in comparison to those obtained with full supervision, as shown in Fig. 4. One can see that our method trained with only box supervision produces segmentation results comparable with the fully supervised DeepLab-LargeFOV model. Moreover, TransferNet [52] is included in the comparison; since it is based on image-level class labels, its segmentation performance is significantly inferior to the box-supervised methods.
In the semi-supervised setting, all the competing methods improve over their weakly supervised counterparts. That is, with only 1,464 ground-truth pixel-level annotations, semantic segmentation performance can be notably boosted. We can conclude that more accurate masks lead to better segmentation performance. It is noteworthy that our method in the weakly supervised setting is even superior to all the other competing methods in the semi-supervised setting, which supports that our LPG generates more accurate pseudo masks.
2) DeepLab-ResNet101 as Segmentation Backbone: In the weakly supervised setting, our method is compared with SDI [31] and BCM [33]. The comparison results are reported in Table V. Owing to the stronger modeling capacity of ResNet101 [57], segmentation performance is improved over that with DeepLab-LargeFOV [40]. Our segmentation network from the first stage achieves a 1.3% mIoU gain over the state-of-the-art BCM, and the gain is further enlarged to 2.6% mIoU by our segmentation network from Stage 3. Fig. 4 shows the segmentation results in comparison with the model trained in a fully supervised manner; the segmentation results of our method are again comparable with those of the fully supervised model. One may notice that Box2Seg [47] reports higher quantitative metrics than these competing methods, but it is not included in our comparison, because Box2Seg adopts a stronger segmentation backbone and its source code and trained models are not publicly available, making a fair comparison infeasible. In the semi-supervised setting, our method is compared only with BCM and performs much better in terms of mIoU. Moreover, our method in the weakly supervised setting achieves an mIoU gain of 1.2% over BCM in the semi-supervised setting, indicating the effectiveness of the accurate pseudo masks generated by our LPG. The visualized segmentation results in Fig. 4 and Fig. 5 show that our method is very competitive with the fully supervised setting.
D. Robustness and Generalization Ability
To validate the robustness of our LPG with fewer training data, we randomly selected 75%, 50% and 25% of the images from the COCO (60-class) dataset to act as D_aux, resulting in 85%, 60% and 30% of the segmentation masks for training the LPG models; the LPG models trained with these reduced data remain effective, offering a new feasible perspective for improving learning with weak supervision in practical applications. Furthermore, to prove the generalization ability of our LPG, we trained LPG_voc on the VOC training set (20 classes) and then applied it to the COCO validation set (60 classes). From Table VII, the segmentation results of LargeFOV with LPG_voc are only moderately inferior (∼2% mIoU) to those with LPG_coco. We note that LPG_coco is specifically trained on the COCO training set (60 classes). The diversity and quantity of classes, scenes and images in COCO are much greater than in VOC, indicating the robustness and generalization ability of LPG.
By ensuring that there are no intersecting classes between D and D_aux, we have validated that the trained LPG generalizes well to new classes and new datasets. We further discuss whether the trained LPG models can be generalized to other segmentation backbones. To this end, we train another DeepLab-ResNet101 [5] segmentation network on the PASCAL VOC 2012 dataset, where the 3-stage LPG models learned in collaboration with DeepLab-LargeFOV [40] on MS-COCO (denoted LPG′) are adopted to generate the pseudo masks. As shown in Table VIII, Ours′_1 is inferior to Ours_1, since the pseudo masks generated by LPG′ at Stage 1 are not as precise as those generated by LPG at Stage 1. But with increasing stages, LPG′ generates comparable segmentation masks, and thus the final segmentation performance is very close to Ours. The results show that the learned pseudo mask generators can also be well generalized to other segmentation backbones.
E. Failure Cases
As shown in Fig. 6, LPG fails to distinguish the black shoes (left person) and the arms (right person) from the background in the first image. Also, some patches near the horse (second image) are assigned to the foreground. We believe the reason is their similar color and texture to the surroundings, together with their texture discrepancy from the main body; in such cases dense CRF [34] works even worse.
V. CONCLUSION

In this paper, we proposed a learning-based pseudo mask generator (LPG) for weakly supervised semantic segmentation with bounding box annotations. We formulate learning the pseudo mask generator as a bi-level optimization problem and propose an EM-like algorithm to learn stage-wise, class-agnostic LPG models on the auxiliary MS-COCO dataset with pixel-level and bounding box annotations. Because LPG is specifically trained for the box-supervised segmentation task, it can generate more accurate pixel-level pseudo masks, boosting segmentation performance. Experimental results on PASCAL VOC 2012 have validated the effectiveness and generalization ability of LPG. Our method significantly boosts weakly supervised segmentation performance, and further closes the gap between box-supervised and fully supervised learning in semantic segmentation.
TUMOSPEC: A Nation-Wide Study of Hereditary Breast and Ovarian Cancer Families with a Predicted Pathogenic Variant Identified through Multigene Panel Testing
Simple Summary: TUMOSPEC was designed for estimating the risk of cancer for carriers of a predicted pathogenic variant (PPV) in a gene usually tested in a hereditary breast and ovarian cancer context. Index cases are enrolled consecutively among patients who undergo genetic testing as part of their care plan in France. First- and second-degree relatives and cousins of PPV carriers are invited to participate whether they are affected by cancer or not, and are tested for the familial PPV. Genetic, clinical, family and epidemiological data are centralized at the coordinating centre. The three-year feasibility study included 4431 prospective index cases, with 19.1% of them carrying a PPV. This showed that the study logistics are well adapted to clinical and laboratory constraints, and collaboration between partners (clinicians, biologists, coordinating centre and participants) is smooth. Hence, TUMOSPEC is being pursued, with the aim of optimizing clinical management guidelines specific to each gene.

Abstract: Assessment of age-dependent cancer risk for carriers of a predicted pathogenic variant (PPV) is often hampered by biases in data collection, with a frequent under-representation of cancer-free PPV carriers. TUMOSPEC was designed to estimate the cumulative risk of cancer for carriers of a PPV in a gene that is usually tested in a hereditary breast and ovarian cancer context. Index cases are enrolled consecutively among patients who undergo genetic testing as part of their care plan in France. First- and second-degree relatives and cousins of PPV carriers are invited to participate whether they are affected by cancer or not, and genotyped for the familial PPV. Clinical, family and epidemiological data are collected, and all data including sequencing data are centralized at the coordinating centre. The three-year feasibility study included 4431 prospective index cases, with 19.1% of them carrying a PPV. When invited by the coordinating centre, 65.3% of the relatives of index cases (5.7 relatives per family, on average) accepted the invitation to participate. The study logistics were well adapted to clinical and laboratory constraints, and collaboration between partners (clinicians, biologists, coordinating centre and participants) was smooth. Hence, TUMOSPEC is being pursued, with the aim of optimizing clinical management guidelines specific to each gene.
Introduction
DNA-based testing has become a common part of routine clinical assessment for individuals with clinical features suggestive of a hereditary predisposition. For hereditary breast and ovarian cancer (HBOC) predisposition, clinical genetic testing has focused primarily on the two major predisposing genes: BRCA1 and BRCA2 (BRCA1/2). The identification of a germline disease-causing variant, also called "pathogenic variant", in the index case of an HBOC family (i.e., the first ascertained patient) allows her/his relatives to benefit from predictive testing and to receive genetic counselling and preventive medical management [1]. Owing to improvements in knowledge about carcinogenesis pathways and to the advent of sequencing technologies, other DNA repair genes have been confirmed or have more recently emerged as HBOC susceptibility genes, such as ATM, CHEK2, PALB2, RAD51C and RAD51D [2][3][4]. Massive parallel sequencing has also deeply changed the clinical approach to genetic testing in medical oncology. Instead of single gene testing, it provides clinicians with information about one or more germline pathogenic variants associated with disorders/syndromes in a single test. In 2018, 21,217 subjects attended one of the 149 cancer genetics clinics in France for advice about their personal and/or family history of breast and/or ovarian cancer. Among them, 18,633 index cases had a multigene panel test, and a pathogenic variant was identified in 10% of them [5].
While clinical geneticists agree to define a gene analysis as usable only if the identification of a pathogenic variant results in a health benefit for patients, very few studies have investigated the clinical validity of the inclusion of multicancer syndrome genes and of the other breast and ovarian cancer susceptibility genes in multigene panels used in the context of clinical management of HBOC family members [6]. To date, only the risks of breast and ovarian cancer for carriers of a loss-of-function (LoF) variant in PALB2 [7,8], RAD51C and RAD51D [9] have been assessed by collecting data on variant carriers and their relatives from multiple centres worldwide to reach satisfactory statistical power. The published data suggest that the breast cancer risk for PALB2 LoF carriers may overlap with that of BRCA2 pathogenic variant carriers [8]. In contrast, RAD51C and RAD51D LoF confer a moderate risk of breast cancer but a high enough risk of tubo-ovarian carcinoma, which leads to the recommendation of a risk-reducing salpingo-oophorectomy as a preventative measure in these women [9]. However, even for these three genes, gathering more family data will help to refine the estimate of cancer risk for variant carriers. For other genes included in commercial or custom in-house HBOC multigene panels, the reliability of associated age-dependent cancer risks, and the clinical utility of testing them have not been demonstrated. Moreover, genotype-phenotype correlations, as well as other potential modifying factors, need to be investigated.
Another limitation for the use of all these genes in clinical practice is the unknown pathogenicity of many identified variants for a given disease [10,11]. Indeed, it is challenging to classify many of them as either "pathogenic", "likely pathogenic", "of uncertain clinical significance", "likely benign" or "benign". A survey conducted on 16 genes commonly included in HBOC panels (ATM, BARD1, BRIP1, CDH1, CHEK2, MRE11A, NBN, NF1, PALB2, PTEN, RAD50, RAD51C, RAD51D, STK11, TP53 and MEN1) among members of the Evidenced-based Network for the Interpretation of Germline Mutant Allele (ENIGMA) [12] confirmed that currently only a small number of genes beyond BRCA1/2 are routinely analyzed worldwide. For those, only the variants defined as "pathogenic", i.e., essentially LoF, are used in clinics [13]. Management guidelines for carriers are very conservative and the identification of such a variant does not greatly impact the usual practices based on family history. Moreover, these guidelines differ between countries, especially in regard to starting age and type of imaging, and risk-reducing surgery recommendations [13].
TUMOSPEC (for "TUMOr SPECtrum") is a family-based nation-wide study designed to measure the age-dependent cancer risk of carriers of a predicted pathogenic variant (PPV) in a gene usually included in diagnostic HBOC multigene panels. It also aims at defining the tumor spectrum associated with these genes, i.e., the variety of organs concerned by the predisposition, in order to provide consensual clinical recommendations specific to each gene. The study is conducted in partnership with the French network of family cancer clinics and molecular diagnostics laboratories that compose the Cancer and Genetic Group (http://www.unicancer.fr/en/cancer-and-genetic-group, accessed on 19 July 2021). The TUMOSPEC multigene panel includes "actionable" genes other than BRCA1 and BRCA2 (i.e., genes that are nowadays routinely tested in France in addition to the two major predisposing genes if an HBOC predisposition is suspected) and "research" genes (i.e., genes for which no consensus clinical management guidelines have been proposed thus far in France [14]). Index cases are enrolled consecutively among patients who are being offered a genetic test as part of their care plan. When an actionable variant is detected, the counselled family members are invited by the clinical geneticist to participate in the study; otherwise, family members of PPV carriers are invited by the coordinating centre that centralizes genetic, clinical, family and epidemiological data for all participants. Here, we describe the study protocol and results of the three-year feasibility study, which included 4431 prospective index cases. For families where an actionable variant was detected in the index case, 1.8 counselled relatives per family, on average, were enrolled by the clinical geneticist through the usual cascade testing. For other families where a non-actionable variant was detected, 5.7 relatives per family, on average, were enrolled by the coordinating centre. Regardless of the type of gene or the mode of invitation of relatives, we found that the proposed logistics were well adapted to clinical and laboratory constraints, and that communication between partners (clinics, laboratories, coordinating centre and participants) (The clinics and diagnostics laboratories composing the TUMOSPEC Investigators Group are shown in Appendix A) was quite smooth. Therefore, TUMOSPEC is being pursued to assess cancer risks in the families of PPV carriers. In future, the TUMOSPEC protocol may be easily adapted to other hereditary cancers or other diseases.
Index Case Definition and Eligibility of Family Members
An index case is the first member of a family being counselled who has never undergone any genetic testing in the past and who has been recommended for an HBOC multigene panel test, which includes BRCA1 and BRCA2 testing, as part of her/his care plan at enrolment in TUMOSPEC. First- and second-degree relatives and cousins from both sides of the family are eligible for the study if a variant identified in the index case fulfils the variant eligibility criteria (see Section 2.2). Index cases and relatives should be aged 18 years or older. Children of index cases, even older than 18 years, are not eligible for the study as these individuals will not be informative enough in the analyses.
Process for Invitation of Family Members Depends on the Altered Gene and on the Class of Variant
Genes included in the TUMOSPEC panel were selected by a steering committee composed of clinical practitioners, epidemiologists and molecular geneticists. A gene was selected if it had been linked with breast and/or ovarian cancer predisposition in several independent case-control or family-based studies, including studies on familial breast cancer conducted by investigators in the French population [15][16][17][18]. The TUMOSPEC multigene panel is divided into sub-panel A, which includes genes for which no consensus clinical management guidelines have been proposed thus far in France (namely ATM, BAP1, BARD1, BRIP1, CHEK2, FAM175A, FANCM, MRE11A, NBN, RAD50, RAD51B, RINT1, STK11 and XRCC2) and sub-panel B, which includes genes for which consensus clinical management guidelines exist for pathogenic variant carriers (namely CDH1, PALB2, MLH1, MSH2, MSH6, PMS2, PTEN, RAD51C, RAD51D and TP53) [14]. Therefore, the prediction on the pathogenicity of the identified variant determines how relatives of index cases are invited to participate to the study. The two protocol options are summarized in Figure 1a (sub-panel A) and 1b (sub-panel B).
If a variant eligible for TUMOSPEC is found in a gene from sub-panel A, the name of the gene is not revealed to the index case since it will not modify her/his management or the management of her/his relatives. In this situation, the coordinating centre directly invites eligible relatives (i.e., first- and second-degree relatives, and cousins) to participate in the study. If an eligible variant is detected in a gene from sub-panel B and classified as "pathogenic" or "Class 5", according to the five-tier class system defined by Plon et al. [11], the name of the gene is revealed to the index case since it will modify her/his management and the management of her/his relatives. In that case, the geneticist invites the family members to participate in the study during a genetic counselling session. If a variant in a gene from sub-panel B is classified as other than "pathogenic" or "Class 5", the process for inviting the relatives of the index case is the same as for variants detected in genes from sub-panel A.
In addition to the index cases included prospectively in the study, some individuals carrying a variant identified through multigene panel testing prior to the recruitment of the prospective cases were also invited to participate; this was to assess more rapidly the feasibility of including relatives and to validate our logistics. They are hereafter referred to as "retrospective index cases".
Variant Eligibility Criteria
A variant identified in one of the TUMOSPEC genes is eligible for the study, and is therefore considered a PPV, if it fulfils the following criteria:
• The effect of the variant is predicted to have a deleterious impact on the gene product function. More specifically, eligible variants are:
a. Variants predicted to shorten the coding sequence of the gene (nonsense variants, small insertions/deletions (indels), canonical splice site alterations and large rearrangements leading to a truncated protein). Such variants are also referred to as "loss-of-function variants" or "LoF";
b. Genetic alterations in which a single base pair substitution alters the genetic code, referred to as "missense variants", and in-frame indels (small insertions/deletions that do not alter the reading frame) if:
i. they have been classified as "pathogenic" or "Class 5" according to the five-tier class system defined by Plon et al. [11] by a group of experts for a specific gene (typically the ClinVar or ENIGMA classification expert groups);
ii. or an in vitro assay has demonstrated a deleterious impact on the gene product function or on splicing;
iii. or the score obtained with the Combined Annotation Dependent Depletion (CADD) tool [26] is indicative of the deleteriousness of the variant. Here, we considered variants with a CADD phred score ≥20 eligible for the study. A score of 20 means that the variant is among the top 1% of most deleterious substitutions when ranking all possible substitutions in the human genome. For genes for which a manually curated protein multiple sequence alignment is available on the Align-GVGD website (http://agvgd.hci.utah.edu/agvgd_input.php, accessed on 1 July 2020), namely ATM, CHEK2, MLH1, MRE11A, MSH2, MSH6, NBN, PALB2, PMS2, RAD50, TP53 and XRCC2, missense variants with an Align-GVGD grade of C45, C55 or C65 are also eligible [27], even if the CADD phred score is <20.
• The minor allele frequency of the variant does not exceed 0.05% in the populations of the 1000Genomes project [19] and of the Genome Aggregation Database (GnomAD) [20].
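For illustration only, this eligibility rule can be summarized as a small Python predicate; the dictionary keys below are hypothetical, and real curation relies on annotated VCF records and expert classifications rather than such a simplified check.

```python
def is_eligible_variant(variant):
    """Rough sketch of the TUMOSPEC eligibility rule for a predicted pathogenic variant.

    `variant` is assumed to be a dict with illustrative keys; the frequency criterion
    (MAF <= 0.05% in reference populations) is assumed to be checked upstream.
    """
    if variant["type"] == "LoF":                                # nonsense, frameshift indel,
        return True                                             # canonical splice, truncating rearrangement
    if variant["type"] in ("missense", "inframe_indel"):
        return (
            variant.get("expert_class") == "pathogenic"             # Class 5 (ClinVar/ENIGMA)
            or variant.get("functional_assay_deleterious", False)   # in vitro evidence
            or variant.get("cadd_phred", 0) >= 20                    # CADD criterion
            or variant.get("align_gvgd") in ("C45", "C55", "C65")    # for manually curated genes
        )
    return False
```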
All identified variants are subject to a curation process and named according to the Human Genome Variation Society (HGVS) nomenclature before integration into the TUMOSPEC genetic database. Although BRCA1 and BRCA2 are not part of the TUMOSPEC panel, the two genes are HBOC genes and they are therefore tested in parallel with the 24 investigated genes here. The co-occurrence of multiple eligible variants in one or more of the TUMOSPEC genes, or with a BRCA1/2 variant (either a pathogenic variant or a variant of uncertain clinical significance (VUS)), is recorded in the database.
Biological Samples and Genetic Analyses
For index cases, the TUMOSPEC panel is analysed on the DNA aliquot prepared from the same EDTA blood sample that is used to perform the routine HBOC multigene panel test; the genetic analysis is performed by the laboratory performing the BRCA1/2 test. A second blood sample, usually a sample stored on an FTA ® card, is collected to confirm the presence of the variant following routine practice. The participating laboratories analyse the full coding sequence and exon-intron boundaries of at least 15 out of 24 genes in the TUMOSPEC panel using their usual (or upgraded) hybridization capture kit and sequencing instrument (Table S1). Some heterogeneity may be introduced due to a difference in the in-house bioinformatics pipelines implemented in each laboratory. However, the standard quality procedures used for detecting variants in actionable genes are applied to all genes, and all eligible variants identified in index cases are confirmed by Sanger sequencing (for single nucleotide variants or small indels), Multiplex Ligation-dependent Probe Amplification (MLPA) [28] or Quantitative Multiplex Polymerase chain reaction of Short fluorescent Fragments (QMPSF) [29] (for large indels and rearrangements).
Relatives invited by the coordinating centre provide a saliva sample using an Oragene ® DNA sample kit (OG-500.014) and send it to the laboratory that has tested the index case. Pre-stamped envelopes are provided to the relatives so that the samples can be sent to the appropriate laboratory by postal mail; temperature-controlled conditions are not required. DNA is extracted, analysed and stored applying the standard procedures that are used for all diagnostic tests. The relative's genotype for the variant detected in the index case is determined by Sanger sequencing (for single nucleotide variants or small indels), MLPA [28] or QMPSF [29] (for large indels and rearrangements).
Blood and saliva samples, as well as DNA aliquots from all participants, are kept in the laboratory after the genetic analysis has been performed, for future research projects that will be conducted in the framework of TUMOSPEC. No systematic collection of tumor specimens is performed. However, pathology reports and information of sample storage conditions and location are collected. This information will facilitate access to the tumor samples for specific projects to come.
Data Collection and Storage
All index cases carrying an eligible variant and all relatives affected and unaffected with cancer participating in the research (whether they carry the familial variant or not) complete a questionnaire on environmental, lifestyle and personal and family history of cancer (and other diseases). This self-report questionnaire contains questions about demographics, lifestyle (alcohol intake, smoking, etc.) and medical radiation exposures, as well as gynecological and obstetric history for women. Questionnaires are collected by the coordinating centre, where data are coded, digitized and checked for inconsistencies. Requests for additional information are sent out to the participants in case some information is missing or incoherent.
A database gathers familial, clinical and epidemiological data for each participant, and another database centralizes the results of the genetic analyses performed by the laboratories. All the data are stored on secure servers in a manner guaranteeing patient anonymity. Only the staff of the coordinating centre and study have access to the epidemiological and genetic data.
For index cases, next-generation sequencing (NGS) data (FASTQ format) are also centralized by the coordinating centre for future downstream analyses. This will allow, for example, the comparison of the performance of the different routine bioinformatics pipelines in terms of NGS read quality control, NGS read alignment and reference mapping, variant frequency measurements, analytical sensitivity and specificity, and variant annotation tools in order to harmonize the reporting of the variants.
Data Collected
A total of 37 family cancer clinics and 16 molecular diagnostics laboratories participated in the feasibility study (Tables S1 and S2). The recruitment of index cases started in September 2017 and ended in December 2019, and the recruitment of relatives ended in July 2020. In total, 4431 prospective and 71 retrospective index cases were recruited. Figure 2 shows the dynamics of recruitment of the prospective index cases during the feasibility study.

The epidemiology questionnaire was sent to 812 participants (417 index cases and 246 relatives from the prospective dataset, and 71 index cases and 78 relatives from the retrospective dataset). By July 2020, 193 (39.5%) index cases and 200 (61.7%) relatives had completed and returned their questionnaire (Table 1). The difference in return rates between index cases and relatives (Figure S1) may be explained by the fact that the index cases sent back their questionnaire to the coordinating centre together with a list of their relatives who were eligible for the study. Before doing so, the index cases first contacted their relatives to inform them about the study and protocol, and requested their permission to provide their mail addresses and other contact details to the investigators, which meant a much longer delay in returning the questionnaire for the index cases than for the relatives.
Relatives invited by the coordinating centre received the epidemiological questionnaire along with an Oragene kit for saliva collection. Fifty percent of relatives who returned the questionnaire to the coordinating centre did so in less than 27 days (range: 5-397), and the delay in sending their saliva sample to the diagnostic laboratory was similar (median: 24 days, range 2-384). Of note, the delay in sending the saliva sample to the laboratory was calculated after excluding individuals who sent it after 16 March 2020, which corresponds to the start of the first national lockdown in France due to the COVID-19 pandemic (as the collection of saliva samples during this period was interrupted).

Table 1 footnotes: 3 Questionnaires were sent to index cases with a positive genetic test and to all invited relatives. 4 Number of questionnaires provided to participants before 31 July 2020. 5 Questionnaires completed and sent back to the coordinating centre by 31 July 2020.
Participants' Characteristics
Among the 4502 participating index cases, 4419 were women and 83 were men. Mean age at recruitment was 52.5 years (range: 19-91) for women and 65.1 years (range: 35-90) for men. Ninety-three percent of index cases had developed a first cancer prior to recruitment. Among female index cases, 3779 (85.5%) had breast cancer, 462 (10.5%) had ovarian cancer, 129 (2.9%) had cancer at another site, and 49 (1.1%) had no cancer. Among male index cases, 60 (72.2%) had breast cancer, 14 (17.0%) had prostate cancer, 8 (9.6%) had cancer at another site and 1 had no cancer (1.2%). For female index cases, we did not observe any difference in the mean age at diagnosis of first cancer according to the result of the TUMOSPEC panel analysis (46.9 years for carriers of a PPV vs. 47.4 years for noncarriers), nor between prospective (46.9 years) and retrospective cases (47.7 years) ( Table 2). In the prospective dataset, male index cases carrying a PPV were diagnosed at a younger age than noncarriers (54.4 years vs. 61.9 years). Male index cases in the retrospective dataset, who by design are PPV carriers, were even younger at cancer diagnosis (mean: 48.5 years), which may be attributable to a selection bias ( Table 2).
Identified Variants
In total, 133 LoF (26.8%), 349 missense variants (70.4%) and 14 in-frame indels (2.8%) were detected among the 456 prospective index cases with positive TUMOSPEC panel results (Figure 3a). A total of 40 index cases carried 2 eligible variants and no index cases carried 3 or more variants. A total of 62 index cases carried a variant classified as pathogenic in an actionable gene (sub-panel B) and 154 index cases carried a VUS in this group of genes, that is a Class 3 variant according to the 5-tier classification [11] (data not shown). The weighted distribution of variants per gene is shown on Figure 3b. The most frequently altered genes in the TUMOSPEC panel were ATM, CHEK2, PALB2, MSH6 and BRIP1.
Additionally, 82 eligible variants were detected in the 71 retrospective index cases (53 LoF, 28 missense and 1 indel). In this series, 41 variants were in an actionable gene, of which 26 were classified as pathogenic and 15 were classified as VUS. However, the distribution of variants in the retrospective series does not reflect the true distribution of variants in index cases of HBOC families, given that some genes were not part of the HBOC panels used by the laboratories prior to the implementation of the TUMOSPEC protocol, and some variant types were not flagged by the analytical pipelines implemented in the laboratories for routine genetic testing. Moreover, clinicians may have selected retrospective cases on the family phenotype or the deleteriousness of the variant.
Although BRCA1 and BRCA2 were not, per se, part of the TUMOSPEC panel, the two genes were tested in parallel with the investigated genes, which allowed us to assess the co-occurrence of BRCA1/2 pathogenic variants with TUMOSPEC eligible variants in index cases. We found that 32 out of the 456 (7.0%) prospective index cases carrying at least 1 eligible variant also carried a BRCA1/2 pathogenic variant (Table 2). As expected, the frequency of BRCA1/2 pathogenic variants in the retrospective index cases was much lower than the one observed in the prospective series, as retrospective index cases with no BRCA1/2 variants were more likely to have been invited to participate in TUMOSPEC in an attempt to elucidate the familial predisposition (Table 2).
Although BRCA1 and BRCA2 were not, per se, part of the TUMOSPEC panel, the two genes were tested in parallel with the investigated genes, which allowed us to assess the co-occurrence of BRCA1/2 pathogenic variants with TUMOSPEC eligible variants in index cases. We found that 32 out of the 456 (7.0%) prospective index cases carrying at least 1 eligible variant also carried a BRCA1/2 pathogenic variant ( Table 2). As expected, the frequency of BRCA1/2 pathogenic variants in the retrospective index cases was much lower than the one observed in the prospective series, as retrospective index cases with no BRCA1/2 variants were more likely to have been invited to participate in TUMOSPEC in an attempt to elucidate the familial predisposition ( Table 2).
The complete list of variants in the TUMOSPEC genes identified in the prospective and retrospective index cases of the feasibility study and their occurrence is provided in Table S3. It should be noted that the two variants NM_007194.4:c.1100del (p.Thr367fs; rs555607708) in CHEK2 [21,22] and NM_020937.4:c.5791C>T (p.Arg1931*; rs144567652) in FANCM [23][24][25] were under-reported (no carriers of the FANCM variant and only 8 carriers of the CHEK2 variant). This is because their minor allele frequency exceeds 0.05% in populations of the 1000Genomes project [19] and in the Genome Aggregation Database (GnomAD) [20], and they were therefore initially not eligible for this study. However, due to their relevance in breast cancer susceptibility, an exception was made to include these two variants from October 2019.
Discussion
The identification of a genetic predisposition to cancer is now an integral part of the clinical care of patients and their relatives. It allows the implementation of prevention programs and screening when the risks are known. The effectiveness of genetic testing has been notably demonstrated for women carrying a BRCA1 or BRCA2 pathogenic variant, for whom prophylactic surgeries reduce mortality. However, current genetic testing does not provide significant assistance when no pathogenic variant is identified, which is the case for 85% of the HBOC families enrolled in TUMOSPEC. New genetic tests are essential to support clinical decision making and to ensure improved outcomes in this situation. Current multigene panel tests often combine both diagnosis and research genes, but the genes sequenced for research purposes should be defined and patients informed before testing [30]. Note that the classification of a given gene as diagnosis or research might change in the coming years. In particular, STK11 is not currently an actionable gene for HBOC in France, unlike in the USA or other countries following the National Comprehensive Cancer Network guidelines [31]. So far, no STK11 pathogenic variants for Peutz-Jeghers syndrome have been identified in TUMOSPEC. However, should such variants be identified in the future and cascade testing be performed in the family, the invitation of family members should be handled by the clinical geneticist. Multigene panel sequencing will have the potential to improve germline risk assessment in HBOC families if: 1. classification of variants regarding their pathogenicity is accurate; 2. reliable estimates of the associated age-specific cancer risks can be obtained; and 3. a consensus is reached on when to test a given gene and how to manage a reported (likely) pathogenic variant [2,6]. However, for some genes, the cumulative cancer risk for carriers of a pathogenic variant may turn out to be quite low, and testing such genes would not improve the surveillance of the patients. Conversely, for other genes, carriers of a pathogenic variant may benefit from adapted surveillance and treatment.
The TUMOSPEC feasibility study included 4431 prospective index cases, and 19.1% of the prospective index cases with an available genetic result were found to carry at least one PPV in a gene on the investigated panel. Furthermore, 65.3% of the relatives of PPV carriers who were directly invited by the coordinating centre agreed to participate, with 50% of them returning their questionnaire and saliva sample in less than 1 month. On average, 5.7 relatives per positive family invited by the coordinating centre agreed to participate (i.e., 278 relatives from 48 families), while only 1.8 relatives per family (i.e., 28 relatives from 16 families) were enrolled by the clinician who counselled the family members following the identification of an actionable variant in the index case. This shows the efficiency of having family members invited by a coordinating centre in such a research program.
Qualitative feedback from clinicians and diagnostic laboratory teams on the recruitment methods, the information and sampling circuits, the communication between the coordinating centre, clinicians, laboratories and participants (particularly the efficiency of recruiting relatives), and the comprehension of the documents (newsletter, consent forms, questionnaire, etc.) satisfied the evaluation criteria; the study is therefore being pursued. Overall, expanding the study to the analysis of 10,000 multigene panels within the next 3 years is expected to identify ~2,000 families with a PPV in one of the 24 genes, with two to six family members genotyped for the familial variant, which will allow the refinement of cancer risk estimates for the most frequently altered genes.
The PPV rates according to gene and family phenotype will define our analysis strategy. For instance, we expect that we will rapidly gather sufficient families for genes such as ATM, CHEK2, PALB2, MSH6 and BRIP1, while PTEN, STK11, FANCM and XRCC2 families will be much rarer. For this latter group of genes, the data may be compiled with data from other countries where similar efforts have been initiated and/or in the framework of international consortia such as ENIGMA (https://enigmaconsortium.org/, accessed on 19 July 2021), BRIDGES (https://bridges-research.eu/project-bridges/, accessed on 19 July 2021) and COMPLEXO [32].
Our analytical strategy to assess cancer risks builds on the fact that the TUMOSPEC families are ascertained through family cancer clinics for the HBOC phenotype. Therefore, once we have recruited enough families who segregate a PPV for a given gene, we will use methods such as the genotype-restricted likelihood method and maximum likelihood parametric methods, which provide unbiased penetrance estimates irrespective of the criteria used for family selection [33], or other modified segregation-analysis approaches, such as MENDEL [34]. These methods use the information available in families by calculating the likelihood conditioned on the phenotypes of all family members (retrospective likelihood), also allowing for residual familial aggregation in addition to the effect of PPVs. In order to capture the multiple cancer types potentially associated with some genes, we will also use competing risk models [35]. For some genes, the rarity of families who segregate an eligible variant may make the estimation of cumulative risks impossible in the TUMOSPEC sample (e.g., STK11 and PTEN). For those, we will estimate relative risks by calculating incidence ratios to assess the differences in incidence between family members carrying a PPV and family members not carrying the variant, and will use a Cox proportional hazards model to estimate the hazard ratio (HR).
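As a rough illustration of the last step, the sketch below shows how a hazard ratio for carriers versus noncarriers of a familial variant could be estimated with a Cox proportional hazards model in Python using the lifelines package. The data frame and its column names (age_at_censoring, breast_cancer, carrier) are hypothetical placeholders rather than TUMOSPEC variables, and the sketch deliberately ignores the ascertainment corrections and retrospective-likelihood machinery described above.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-relative data: follow-up age, cancer indicator, carrier status.
relatives = pd.DataFrame({
    "age_at_censoring": [45, 60, 52, 70, 38, 66, 55, 49, 72, 58],  # age at diagnosis or last follow-up
    "breast_cancer":    [1, 0, 1, 0, 0, 1, 0, 1, 0, 0],            # 1 = affected, 0 = censored
    "carrier":          [1, 0, 1, 1, 0, 0, 1, 1, 0, 0],            # 1 = carries the familial PPV
})

# Fit a Cox proportional hazards model with age as the time scale.
cph = CoxPHFitter()
cph.fit(relatives, duration_col="age_at_censoring", event_col="breast_cancer")

# exp(coef) for "carrier" is the hazard ratio of carriers versus noncarriers.
print(cph.summary[["coef", "exp(coef)", "p"]])
```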
To define the tumor spectrum associated with alterations in each TUMOSPEC gene, we assume that the TUMOSPEC families are not selected because of the incidence of cancer at sites other than breast and ovary. Therefore, the incidence of these cancers in the recruited families can be studied by comparing it to that of the general population. The expected number of cancers per 5-year age category will be calculated from the French age-, sex- and period-specific estimated incidences. The standardized incidence ratio (SIR) of cancer associated with PPVs will be estimated from the ratio between the observed and the expected number of cases in the families. We will also calculate the relative risk weighted by the a priori probability of being a PPV carrier [36]. We will correct for bias from the selective testing of survivors and/or relatives affected by cancer within families, if any. Indeed, the over-genotyping of cases may bias estimates towards the null hypothesis within the categories of relatives with an unknown genotype [37].
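A minimal sketch of the SIR calculation described above is given below, assuming hypothetical person-year counts and reference incidence rates per 5-year age band; the actual analysis will use the French age-, sex- and period-specific incidences and the weighting and bias-correction schemes cited in [36,37].

```python
# Standardized incidence ratio (SIR) = observed / expected cancers,
# where the expected count sums person-years x reference incidence per age band.

def expected_cases(person_years_by_band, incidence_per_100k_by_band):
    """Expected number of cancers given person-years and reference rates per 5-year band."""
    return sum(py * rate / 100_000
               for py, rate in zip(person_years_by_band, incidence_per_100k_by_band))

# Hypothetical follow-up of family members (three 5-year age bands).
person_years = [1200.0, 950.0, 600.0]    # person-years at risk per band
reference_rates = [60.0, 120.0, 180.0]   # cancers per 100,000 person-years per band
observed = 7                             # cancers observed in the families

expected = expected_cases(person_years, reference_rates)
sir = observed / expected
print(f"expected = {expected:.2f}, SIR = {sir:.2f}")
```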
All NGS data are now being centralized at the coordinating centre, and another short-term objective is to propose a consensus bioinformatics pipeline for future analyses of TUMOSPEC data. Indeed, currently, each diagnostics laboratory uses its in-house bioinformatics pipeline built for routine tests, and no standardization of the pipelines regarding basic quality control criteria, the version of the reference genome used for mapping or the annotation tools and databases (and version) was requested to characterize the variants. Moreover, the selection criteria for variants' eligibility are currently being discussed (MAF, in silico tools used to predict the deleteriousness of the variants). The MAF threshold of 0.05% to select eligible variants was a compromise between avoiding the inclusion of too many innocuous variants and not missing PPVs at conserved positions on the genome based on previous work conducted on some of the TUMOSPEC genes [38][39][40][41][42][43]. Some exceptions have been made for some recurrent variants in CHEK2 and in FANCM, and exceptions for other variants may be made in the future. Similarly, the choice of the prediction tools to assess the deleteriousness of the missense variants may not be optimal (CADD, Align-GVGD), and work is underway to assess the performance of other tools.
Conclusions
We have demonstrated the feasibility of a streamlined national study approach for achieving a large family sample with genetic, clinical and epidemiological data, representing an important resource for the study of cancer risks and the tumor spectrum associated with PPV in cancer susceptibility genes. The TUMOSPEC feasibility study involved nearly 4500 index cases recruited between September 2017 and December 2019 along with their relatives. This showed that the recruitment processes are well adapted to the clinical and laboratory constraints and that communication between the various partners (clinicians, biologists, coordinating centre and study participants) was smooth. Our planned larger study will amplify this resource and will allow us to gather a sufficient number of positive families for each investigated gene in a reasonable period of time. The final goal of this national effort is to improve the understanding of the cancer risk levels associated with the different types of rare variants for each gene and to provide appropriate clinical management guidelines. The knowledge, know-how and data that will emanate from the TUMOSPEC protocol will pave the way for future studies with extended gene panels or involving populations at a high risk of other cancer types. The same rapid discovery of new susceptibility genes is seen in all fields of cancer genetics, with the same lack of information. Hence, the framework of this protocol may rapidly be adapted for the study of other familial cancers.
Evidence for Vpr-dependent HIV-1 Replication in Human CD4+ CEM.NKR T-Cells
Background Vpr is exclusively expressed in primate lentiviruses and contributes to viral replication and disease progression in vivo. HIV-1 Vpr has two major activities in vitro: arrest of the cell cycle in the G2 phase (G2 arrest), and enhancement of viral replication in macrophages. Previously, we reported a potent HIV-1 restriction in the human CD4+ CEM.NKR (NKR) T cells, where wild-type (WT) HIV-1 replication was inhibited by almost 1,000-fold. From the parental NKR cells, we isolated eight clones by limiting dilution. These clones showed three levels of resistance to WT HIV-1 infection: non-permissive (NP), semi-permissive (SP), and permissive (P). Here, we compared the replication of WT, Vif-defective, Vpr-defective, and Vpu-defective viruses in these cells. Results Although both WT and Vpu-defective viruses could replicate in the permissive and semi-permissive clones, the replication of Vif-defective and Vpr-defective viruses was completely restricted. The expression of APOBEC3G (A3G) cytidine deaminase in NKR cells explains why Vif, but not Vpr, was required for HIV-1 replication. When the Vpr-defective virus life cycle was compared with the WT virus life cycle in the semi-permissive cells, it was found that the Vpr-defective virus could enter the cell and produce virions containing properly processed Gag and Env proteins, but these virions were much less efficient at reverse transcription during the next round of infection. In addition, although viral replication was restricted in the non-permissive cells, treatment with arsenic trioxide (As2O3) could completely restore WT, but not Vpr-defective, virus replication. Moreover, disruption of Vpr binding to its cofactor DCAF1 and/or of its G2 arrest activity did not abolish the Vpr-mediated enhancement of HIV-1 replication in NKR cells. Conclusions These results demonstrate that HIV-1 replication in NKR cells is Vpr-dependent. Vpr promotes HIV-1 replication from the 2nd cycle onward, likely by overcoming a block at an early stage of viral replication; this activity does not require DCAF1 or G2 arrest. Further studies of this mechanism should provide new understanding of Vpr function in the HIV-1 life cycle.
Background
The vpr gene is highly conserved in the primate lentiviruses, which include HIV-1, HIV-2, and SIV (reviewed in [1]). HIV-2 and some SIV strains additionally express vpx, a vpr paralog acquired by gene duplication or nonhomologous recombination with an ancestral vpr gene [2,3]. Both Vpr and Vpx proteins are incorporated into nascent virions at a high copy number via an interaction with Gag and consequently are present in the cytoplasm of the target cells [4][5][6][7][8], indicating that they play a role in the early stage of viral infection. In fact, inactivated vpr genes quickly revert back to the active form after infecting a human subject, chimpanzees, and rhesus monkeys, indicating that vpr is under strong positive selection [9,10]; vpr mutations are frequently found in HIV-1 patients with slow disease progression [11][12][13][14]; vpr/vpx double-deletion mutation markedly attenuates SIV replication in rhesus monkeys [15,16]; vpx singledeletion mutation significantly attenuates SIV replication in pig-tailed monkeys [17,18]. These results suggest that vpr and vpx are very important for viral replication and disease progression in vivo.
HIV-1 Vpr exhibits two major activities in vitro: induction of G2 arrest and enhancement of viral replication in monocyte-derived macrophages (MDMs) (reviewed in [19,20]). Vpx does not induce G2 arrest, but it enhances viral replication in both MDMs and monocyte-derived dendritic cells (MDDCs) [21]. More importantly, Vpx enhances HIV-1 replication in trans in these myeloid cells [22,23]. The mechanism of Vprinduced G2 arrest has been thoroughly studied. Vpr hijacks a host DNA-damage-response (DDR) pathway to trigger G2 arrest by activating the DNA damage sensor ATR but not ATM [24]. In particular, Vpr binds to the DDB1-Cul4A-associated-factor-1 (DCAF1) protein, which is recognized by the Cullin (Cul) 4A E3 ligase consisting of Cul4A, RING H2 finger protein homolog (RBX1), and DNA damage-binding protein 1 (DDB1) (reviewed in [25]). It is currently considered that Vpr triggers proteasomal degradation of an as-yet-unknown cell cycle regulator, resulting in ATR-activation and G2 arrest [25]. The ATR-activation by Vpr also triggers apoptosis [24] and the up-regulation of cell surface protein ULBP2 [26], which is a ligand for the natural killer (NK) cell activation receptor NKG2D. Together, all these downstream events may induce killing of infected cells and contribute to viral pathogenesis in vivo.
Although both Vpr and Vpx enhance HIV-1 replication in MDMs, their levels of enhancement are different, and different mechanisms are involved. While initial experiments showed that Vpr could only enhance HIV-1 replication by 2-to 5 -fold (reviewed in [27]), the activity of Vpx could enhance replication by about 100-fold [28][29][30]. Vpr has several other activities in cell culture, including activation of HIV-1 long-terminal-repeat (LTR), increase of viral reverse transcription fidelity, and promotion of viral DNA nuclear import [31]. Although all these activities could benefit viral replication, Vpr-enhanced nuclear import seems to be more relevant for the viral replication enhancement [27]. Vpx also enhances viral nuclear import, but it promotes viral replication through DCAF1 by overcoming a restriction factor that blocks viral reverse transcription [29,30]. Recently, SAMHD1 was identified as a myeloid cell-specific HIV restriction factor, which is counteracted by Vpx [32,33].
It has been generally considered that Vpr and Vpx do not promote viral replication in primary or immortalized CD4 + T-cells. Nonetheless, several groups have reported some levels of viral promotion: Vpr increases HIV-1 replication in human peripheral mononuclear cells (PBMCs) or purified primary CD4 + T-cells by 2-to 6fold [34][35][36]; Vpr from a SIV strain that does not encode Vpx enhances SIV replication in PBMCs by more than 10-fold [37]; Vpr and Vpx jointly enhance HIV-2 or SIV replication in the human 174xCEM cell line or a simian T-cell line by 10-fold [38,39]; Vpx alone enhances HIV-2 or SIV replication in PBMCs by more than 10-fold [40,41]. These results strongly argue that Vpr should play a positive role in HIV-1 infection of CD4 + T-cells during natural infection. Because CD4 + T cells are the primary targets for HIV-1 replication and the loss of these lymphocytic cells is responsible for immunodeficiency, we investigated how Vpr affects HIV-1 replication in these cells. Our efforts result in the identification of human CD4 + T-cells where HIV-1 replication is completely dependent on Vpr. These results suggest an important Vpr function in HIV-1 replication, which was not appreciated before.
Results
Vpr is required for HIV-1 replication in the permissive and semi-permissive NKR clones Previously, we reported a potent HIV-1 restriction in the human CD4 + CEM.NKR (NKR) T cells [42]. NKR cells express both CD4 and CXCR4, but their viral production is typically 100-fold to 1000-fold lower than other human T cells. However, we also found that although the original NKR cells were clonally derived, they contained heterogeneous populations that exhibit different levels of HIV-1 resistance due to unknown variability. From the original NKR cells, we isolated eight NKR subclones that showed three levels of HIV-1 resistance: four clones (N1, N2, N3, N6) were completely nonpermissive (NP); two (N7, N8) were semi-permissive (SP); two (N4, N5) were highly permissive (P) [43]. As can be seen, the viral production from N1-NP, N2-NP and the original NKR cells was~1,000-fold lower than production from N5-P, and~100-fold lower than production from N8-SP and the original CEM cells ( Figure 1A). All these cells grew similarly (data not shown), indicating that the differences in viral production should not result from the differences in cell division.
Because HIV-1 could replicate in N5-P and N8-SP cells, we infected them with WT, Vif-defective (ΔVif ), Vpr-defective (ΔVpr), or Vpu-defective (ΔVpu) HIV-1; and as controls, the human CD4 + T-cell line H9 and another CEM-derived cell line CEM-SS (SS) were also infected. As expected, it was found that in SS cells, all four viruses replicated equally well ( Figure 1B); in H9 cells, only the ΔVif virus did not replicate due to A3G expression [44] ( Figure 1C). In N5-P and N8-SP cells, both the WT and ΔVpu viruses replicated well; the ΔVif virus failed completely to replicate in N5-P and N8-SP cells ( Figure 1D, Figure 1E). It was not surprising that the ΔVif virus did not grow, because these cells also expressed A3G [43]. However, it was very surprising that the ΔVpr virus replicated very slowly in the N5-P cells, and like the ΔVif virus, it failed completely to replicate in the N8-SP cells ( Figure 1D, Figure 1E). Because SAMHD1 was recently identified as a Vpx-sensitive restriction factor and because its expression was not limited to the myeloid tissues [32,33], we wondered whether SAMHD1 played a role in these cells. However, we could not detect SAMHD1 expression in NKR cells and the other clones, but it was detected in THP1 cells ( Figure 1F). These results demonstrated that Vpr was required for HIV-1 replication in the permissive and semi-permissive NKR clones and that it did not target SAMHD1.
Vpr is required for the 2nd round of infection
We then determined how Vpr promoted viral replication in N8-SP and N5-P cells. First, we determined whether Vpr was required for the 1st round of infection. HIV-1 luciferase (Luc) reporter viruses that only replicated one cycle were produced from 293T cells in the presence or absence of Vpr; their infectivity was measured in N8-SP, N5-P, and SS cells (Figure 2A). It was found that both Vpr(+) and Vpr(−) viruses produced similar levels of luciferase activity in these cells ( Figure 2B), and viral production from these cells in the presence or absence of Vpr was also quite similar ( Figure 2C). These results suggested that Vpr was not required during the 1st round of viral replication.
Second, we determined whether Vpr was required for the 2nd round of infection. N8-SP, N5-P, and SS cells were infected with WT or ΔVpr virus. Newly produced viruses were collected, and their infectivity was measured by infection of the HIV indicator TZM-b1 cells ( Figure 2D). It was found that Vpr did not increase HIV-1 infectivity in SS cells, but it increased infectivity significantly in N8-SP cells and less significantly in N5-P cells ( Figure 2E). These results suggested that Vpr is required for the 2nd round of viral replication.
Vpr enhances an early stage of viral replication at the 2nd round of infection
We next investigated how Vpr enhanced viral replication during the 2nd round of infection. Since Vpr was expressed in the producer cells in the previous experiment, we expressed Vpr in the target cells and tested whether it could rescue the ΔVpr virus replication. TZM-b1 cells were transfected with a pcDNA3.1 vector expressing a codon-optimized Vpr gene or an empty vector, and these cells were infected with ΔVpr HIV-1 produced from N8-SP, N5-P, and SS cells (Figure 3A). The expression of Vpr in TZM-b1 cells was clearly detected (Figure 3B). However, even in the presence of Vpr, the ΔVpr HIV-1 infectivity did not increase in the target cells (Figure 3C). This result suggested that Vpr should be expressed from the producer cells to rescue the 2nd round of HIV-1 replication in NKR cells.
Next, we determined whether the N8-SP and N5-P cells produced defective particles in the absence of Vpr, which would reduce viral infectivity during the 2nd cycle of viral replication. N8-SP, N5-P, and another CEMderived CEM-T4 (T4) cells were infected with WT or ΔVpr virus; virions were purified from culture supernatants by ultracentrifugation; Gag, Env, and Vpr expression in cells and virions was determined by Western blotting. We confirmed that HIV-1 replication in T4 cells did not require Vpr (data not shown). It was found that similar levels of processed Gag (p24) and Env (gp120, gp41) were detected in both infected cells and virions regardless of Vpr expression ( Figure 3D). These results suggested that these cells did not produce structurally defective virions in the absence of Vpr, and neither did Vpr affect Gag, Pol, and Env expression. Lastly, we analyzed the early stage of viral replication during the 2nd cycle of infection. T4 cells were infected with WT or ΔVpr virus purified from HIV-1-infected N5-P, N8-SP, or T4 cells; twelve hours later, viral early and late reverse transcription (RT) products were chased by real-time PCR. It was found that the ΔVpr virus from T4 cells generated slightly more early and late viral RT products than the WT virus ( Figure 3E). In contrast, the ΔVpr virus from N5-P cells generated similar levels of both RT products as the WT virus, whereas the ΔVpr virus from N8-SP cells generated 3-to 8-fold less of both RT products than the WT virus. These results suggested that Vpr should enhance an early stage of viral replication during the 2nd cycle infection of these cells.
Vpr is also required for HIV-1 replication in the non-permissive NKR clones
Furthermore, we determined whether Vpr was required in the non-permissive clones as well as the parental NKR cells. Because these cells were highly refractory to HIV-1 infection, we first established a method to restore viral replication. It was reported that arsenic could increase HIV-1 replication in human cells, although the mechanism for this activity remains unclear [45,46]. We tested arsenic activity in these NKR cells by treatment with As2O3. As2O3 is very toxic to human T cells because of its ability to induce apoptosis [47,48], so a very low concentration (0.2 μM) was used. Surprisingly, a complete recovery of WT HIV-1 replication was found in NKR, N1-NP, and N2-NP cells after this treatment (Figure 4, top panels). In contrast, the same treatment did not apparently affect WT HIV-1 replication in the H9 cells. We then compared ΔVif, ΔVpr, and ΔVpu virus replication in these treated cells. In H9 cells, the ΔVif virus was the only one that did not grow, and the As2O3 treatment had little influence on the replication of these three viruses (Figure 4). In NKR, N1-NP, and N2-NP cells, only the ΔVpu virus grew well in the presence of the As2O3 treatment; even under such treatment, both the ΔVif and ΔVpr viruses did not grow (Figure 4). Thus, Vpr was also required for HIV-1 replication in the parental NKR cells and the non-permissive clones.
To understand whether the Vpr-dependent HIV-1 replication was affected by viral tropism, we created a N2-NP cell line expressing human CCR5 (N2-R5). A similar cell line from SS was also created (SS-R5) to be the control. These cells were infected with R5-tropic WT or ΔVpr HIV-1 strain NL-AD8, and viral replication was determined. It was found that both WT and ΔVpr NL-AD8 viruses replicated well in the SS-R5 cells, but they failed completely to grow in the N2-R5 cells ( Figure 5A). When the N2-R5 cells were treated with As 2 O 3 , the WT virus started to replicate, but the ΔVpr virus did not ( Figure 5A). Thus, the Vpr-dependency was not influenced by viral tropism.
To understand whether the enhancement of HIV-1 replication by As 2 O 3 was due to speeding up cell division, we compared the growth of NKR, N2-NP, and T4 cells in the presence or absence of 0.2 μM As 2 O 3 treatment. It was found that As 2 O 3 slightly reduced their growth rate, indicating that it might have some toxic effect at this concentration ( Figure 5B). We also tried another less-toxic arsenic compound (NaAsO 2 ). NKR and N2-NP cells were infected with WT or ΔVpr HIV-1 in the presence of 2 μM NaAsO 2 and viral replication was measured. It was found that like the As 2 O 3 treatment, the NaAsO 2 treatment also selectively increased the WT, but not the ΔVpr HIV-1 replication ( Figure 5C). These results suggested that the effect of arsenic in these cells was specific.
DCAF1 and G2 arrest are not required for Vpr enhancement of viral replication
As introduced earlier, Vpr interacts with DCAF1 to induce G2 arrest; although Vpx does not cause G2 arrest, it interacts with DCAF1 to neutralize SAMHD1. We wondered whether Vpr enhancement of viral replication required DCAF1 and/or G2 arrest. We introduced two well-characterized mutations (Q65R, R80A) into the vpr gene in the proviral clone pNL4-3. The Vpr Q65R mutant does not bind to DCAF1 and therefore does not induce G2 arrest [49]; although the R80A mutant binds to DCAF1, it does not induce G2 arrest [26,50].
First, we compared the expression and activity of these Vpr proteins. The WT, ΔVpr, Q65R, and R80A viruses were produced by transfecting 293T cells with these proviral constructs. The Vpr expression in 293T cells was determined by Western blotting; the G2 arrest activity was determined by infection of SS and NKR with these viruses. It was found that both Vpr Q65R and R80A mutants were expressed at similar levels as the WT protein ( Figure 6A). In addition, the G2/G1 ratio in ΔVpr and WT virus-infected cells shifted from 0.33 to 1.11 in the parental NKR cells and from 0.65 to 1.62 in SS cells, indicating that Vpr induced strong G2 arrest in these cells ( Figure 6B). The background levels of G2 arrest in the ΔVpr virus-infected cells were likely caused by Vif, which also has similar activity [51]. Compared to the WT virus, the levels of G2 arrest by the Q65R mutant in SS and NKR cells were significantly reduced, and these levels were further reduced in the R80A mutant virus-infected cells ( Figure 6B). These results were consistent with previous observations made by other investigators [26,49,50].
Second, we measured viral replication in SS, N8-SP, N2-NP, and the parental NKR cells. Because the N2-NP and parental NKR cells were highly non-permissive, these cells were treated with As 2 O 3 during infection. It was found that in the SS cells, all these viruses replicated equally well; in the N8-SP and arsenic-treated N2-NP and NKR cells, only the ΔVpr virus replicated poorly, whereas the WT, Q65R, and R80A viruses replicated almost equally well ( Figure 6C). These results suggested that both G2 arrest and DCAF1-binding should not be required for Vpr enhancement of viral replication in NKR cells.
Discussion
In this report, we present compelling evidence to demonstrate that Vpr strongly enhances HIV-1 replication in the human CD4+ NKR T-cells. The ΔVpr virus only produces baseline levels of virions in the semi-permissive clone N8-SP, the non-permissive clones N1-NP and N2-NP, and the parental NKR cells (~1 to 10 ng/ml p24Gag), whereas the WT virus produces very high levels (100-1000 ng/ml p24Gag) (Figure 1 and Figure 4). These results suggest that Vpr could enhance HIV-1 replication by 100- to 1000-fold in these cells, which is a much greater effect than the previously reported Vpr effect in macrophages and other CD4+ T-cells. In fact, this Vpr effect is at the same level as that seen with Vif, highlighting its important role in viral infection of CD4+ T-cells. The mechanism of Vpr-enhanced viral replication reported here is different from what was reported before. As introduced earlier, Vpr was shown to facilitate nuclear import of viral DNA in macrophages, which was considered a major mechanism for HIV-1 replication enhancement [27]. We found that Vpr was not required for viral replication during the 1st cycle of viral replication in NKR cells (Figure 2B, Figure 2C), indicating that it did not promote viral replication at this step. Indeed, Vpr was required for the 2nd cycle of viral replication (Figure 2E). Vpr did not affect Gag and Env expression and processing, and it also had no effect on Env packaging (Figure 3D), indicating that it should not play a role in viral entry. Nevertheless, the ΔVpr virus exhibited poor efficiency in conducting reverse transcription, indicating that viral replication should be blocked at an early stage after entering the cell. Using the parental NKR cells, our previous work suggested that NKR cells should express a dominant factor that inhibited the WT virus replication from the 2nd cycle [42]. Since arsenic could completely restore the WT viral replication, this inhibitor was likely disrupted by arsenic. This arsenic-sensitive inhibitor should not exist in the permissive and semi-permissive clones, because the WT HIV-1 replicated well in these cells without arsenic treatment. Nevertheless, all NKR cells, with a possible exception for the permissive clone N5-P, should express another Vpr-sensitive inhibitor at high levels, because viral replication in these cells was Vpr-dependent. Our results suggest that this unknown factor should be packaged into virions and block an early stage of viral replication at the 2nd round of infection, because we found that Vpr was required to be expressed in the viral producer cells and rescued viral reverse transcription in the target cells (Figure 2E, Figure 3C, Figure 3E). Recently, a genome-wide siRNA screen identified 52 new host factors that could inhibit HIV-1 replication at early stages of the viral life cycle [52]. It will be interesting to know how they are expressed in NKR cells and whether they are targeted by Vpr. Alternatively, Vpr may also recruit a positive cellular factor from the viral producer cells into HIV-1 virions that promotes the early stage of viral replication, although we showed before that NKR cells should not be deficient in host factors to support HIV-1 replication [42].
To understand how Vpr counteracted this factor, we tried to disrupt the interaction between Vpr and DCAF1 by introducing the Q65R mutation, and we found that this mutant was still able to enhance viral replication (Figure 6C). We also made the R80A mutant, which still binds DCAF1 but does not induce G2 arrest according to the literature, and we found that it was also capable of enhancing viral replication (Figure 6C). These results strongly suggest that DCAF1 and G2 arrest are not required for this Vpr activity. As introduced earlier, Vpx neutralizes the restriction factor SAMHD1, and this activity is DCAF1-dependent [32]. However, there is another unknown HIV-1 restriction factor in MDDCs, which is neutralized by Vpx in a DCAF1-independent manner [53]. In addition, it has been reported that Vpr could enhance HIV-1 replication in the human Hut78 T-cell line and that this activity was independent of DCAF1 [36]. Thus, the Cul4A E3 ligase is not always required for Vpr and Vpx activity.
Conclusion
We have identified human CD4 + T-cells where HIV-1 replication is completely dependent on Vpr. Vpr promotes HIV-1 replication in NKR cells from the 2nd round of infection, likely by overcoming an early block; and its activity does not require DCAF1 and G2 arrest. We suggest that further study of the Vpr activity in NKR cells will provide new understanding of Vpr function in the HIV-1 life cycle and uncover a novel anti-retroviral mechanism.
To stably express human CCR5 in N2 and CEM-SS cell lines, recombinant retrovirus expressing human CCR5 was produced by transfection of 293T with retroviral vector pBABE.CCR5, packaging vector pCgp, and VSV-G expression vector. Cells were then infected with the virus and stable cell lines were selected by puromycin (0.5 μg/ml).
Virus production
HIV-1 virions were produced from 293T cells by the standard calcium phosphate transfection. Typically, 20 μg proviral DNA were used to transfect 293T cells cultured in a 100-mm dish with 40% confluence, and viruses were collected from the supernatants after 48 hours. Viral production was measured by p24 Gag ELISA.
HIV-1 infection of human T cell lines
A total of 2 × 10^5 cells were incubated with equal amounts of virus at 37°C for three hours. After removal of the inocula and washing three times, cells were cultured in 24-well plates for 16 days. Culture supernatants were collected and replaced with fresh medium every other day, and viral production was measured by p24Gag ELISA. For spinoculation, cells were placed in a 48-well plate with the virus and centrifuged at 1,200 × g, 25°C, for 2 hours. Cells were washed and viral production was determined similarly. After two days, viruses were harvested from supernatants and purified by ultracentrifugation at 222,000 × g, 4°C, for 30 min. Virions were then collected for Western blot analysis.
Real-time PCR analysis of viral cDNAs
A total of 200 ng of virions purified from N5-P, N8-SP, and T4 cells after spinoculation were inoculated into 2 × 10^6 T4 cells at 37°C for 2 hours. Cells were then washed with phosphate-buffered saline (PBS) and cultured for an additional 12 hours. Total cellular DNA was extracted from these cells with the DNeasy kit (Qiagen), and purified DNA was further treated with DpnI at 37°C for 1 hour to remove any plasmid DNA contamination. Equal amounts of cellular DNA were used for real-time PCR with the TaqMan Universal PCR Master Mix kit (Applied Biosystems). The early reverse transcripts (strong stop) were amplified with primers oHC64/oHC65 and quantitated with the fluorescently labeled probe oHC66; the late reverse transcripts were amplified with MH531/MH532 and quantitated with LRT-P; mitochondrial DNA was amplified with MH533/MH534 and quantitated with the mito-probe [56]. Reactions were performed in triplicate. After an initial incubation at 95°C for 10 minutes, 40 cycles of amplification were carried out for 15 sec at 95°C followed by 1 minute at 60°C. Reactions were analyzed using a 7900HT system (Applied Biosystems). Finally, relative HIV-1 cDNA copies were calculated by normalization to the levels of mitochondrial DNA.
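The final normalization step can be sketched as follows. The standard-curve parameters and Ct values below are purely illustrative assumptions, since the paper states only that HIV-1 cDNA copies were normalized to mitochondrial DNA levels; the exact conversion used may differ.

```python
# Convert Ct values to copy numbers via an assumed log-linear standard curve
# (copies = 10**((Ct - intercept)/slope)), then express HIV-1 reverse transcripts
# relative to mitochondrial DNA measured in the same sample.

def ct_to_copies(ct, slope, intercept):
    """Copy number from a Ct value, assuming a log-linear standard curve."""
    return 10 ** ((ct - intercept) / slope)

# Illustrative values only: slopes/intercepts from hypothetical standard curves.
hiv_copies  = ct_to_copies(ct=27.5, slope=-3.4, intercept=38.0)   # late RT product
mito_copies = ct_to_copies(ct=18.2, slope=-3.3, intercept=36.5)   # mitochondrial DNA

relative_hiv_cdna = hiv_copies / mito_copies
print(f"relative HIV-1 cDNA level = {relative_hiv_cdna:.3e}")
```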
Cell cycle analysis
A total of 1 × 10^6 cells were infected with HIV-1 with or without Vpr mutations. Two days later, cells were harvested and washed once with cold PBS. Washed cells were resuspended in 1 ml of cold PBS and then slowly added into 9 ml of ice-cold 70% ethanol with gentle vortexing. Ethanol-fixed cells were left overnight at −20°C.
The following day, cells were centrifuged at 500 × g to remove the ethanol and washed with cold PBS containing 0.1% Triton X-100 (PBS-T). Cells were then incubated with 30 μl of cold PBS-T containing 1 μl of anti-Gag antibody (183-H12-5C) for 30 minutes. After washing twice with PBS-T, cells were incubated with 30 μl of cold PBS-T containing 1 μl of FITC-conjugated anti-mouse immunoglobulin antibody for another 30 minutes. After further washing, cells were resuspended in PBS-T staining buffer containing 20 μg/ml propidium iodide and 200 μg/ml RNase and incubated for two hours on ice. Cell cycle profiles were acquired by flow cytometry, and the results were analyzed with FlowJo to derive the percentages of cells in the different phases of the cell cycle.
Western blotting
The anti-SAMHD1 antibody was purchased from Proteintech Group. HIV-1 viral proteins were detected with antibodies from the NIH AIDS Research and Reference Reagent Program; their catalogue numbers are 1513 (Gag), 526 (gp41), 521 (gp120), and 11836 (Vpr). Horseradish peroxidase (HRP)-conjugated anti-goat, anti-rabbit, or anti-mouse immunoglobulin G secondary antibodies were purchased from Pierce. Detection of the HRP-conjugated antibodies was performed using an enhanced chemiluminescence detection kit (Amersham Bioscience).
Numerical solution of a general interval quadratic programming model for portfolio selection
Based on the Markowitz mean-variance model, this paper discusses the portfolio selection problem in an uncertain environment. To construct a more realistic and better-optimized model, a new general interval quadratic programming model for portfolio selection is established by introducing linear transaction costs and the liquidity of the securities market. To solve the new model, we propose a numerical method based on the Lagrange theorem and duality theory, which yields valid upper and lower bounds of the objective function. The proposed method is illustrated with two examples, and the results show that it is more effective and more practical than a commonly used portfolio selection method.
Introduction
With the mature development of the securities market, in the last decade, studies have paid increasing attention to the theory of portfolio selection.The first quantitative mean variance model for portfolio selection was developed by Markowitz [1], which considers the expected return and variance to be crisp numbers and seeks a balance between two objectives: maximizing the expected return and minimizing the risk in the portfolio selection.Since the 1950s, the quantitative methods for portfolio selection have been dramatically developed in both theories and applications.The deterministic portfolio model that Markowitz developed has been further extended by numerous scholars [2][3][4][5][6][7][8].In these extended portfolio selection models, the coefficients in the objective function and constraint function are always determined as crisp values.However, because of the national economic situation, policy changes, investor psychology and many other factors, the securities market has a strong uncertainty, which causes the dynamic expected returns, risk loss rate and liquidity of the securities market [9].Moreover, the uncertainties increase the risk of decision-making on portfolio selection for investors.There are two popular approaches to address such uncertainties: (i) fuzzy programming and (ii) interval programming.Since the future returns of each securities cannot be correctly reflected by the historical data, particularly in an uncertain environment, investors can use the fuzzy set to estimate the vagueness of security returns and risk for the future [10][11][12][13][14][15], which is a good method to address the portfolio selection.The fuzzy programming treats the uncertain quantities as a fuzzy set with certain membership functions.Thus, the decision maker must have precise knowledge of the grade of membership function, which is not easy to obtain from the limited data that the decision maker often has in practice.In fact, another method to address the uncertainty in the portfolio selection problem assumes that the data are not well defined but can vary in given intervals [16].Hence, interval programming is appropriate to handle the imprecise input data.The existing literatures indicate that interval programming has become a popular topic in the research of portfolio selection because it can enrich the theory of optimization and provide the solution of the problem more practical significance.
At present, the interval programming of portfolio selection is mostly based on the linear format, which is relatively simple compared with non-linear programming.Interval linear programming problems have been explored in several studies on models and estimation methods [17][18][19][20][21][22].Then, it has been extensively applied to portfolio selection studies.Based on the interval order relation, Lai et al. (2002) and Lu et al. (2004) proposed an interval programming portfolio selection by quantifying the covariance and expected return as intervals, respectively [23][24].The difference is that the latter introduces a risk preference coefficient.In solving the multi-objective and multi-period interval portfolio selection optimization model, Giove et al. (2006) proposed the use of a minimax regret approach based on a regret function, and Liu (2013) designed an improved particle swarm optimization algorithm for solution, both of which are used to solve the linear objective function of the interval portfolio model [25][26].Bhattacharyya et al. (2011) proposed three different mean-variance-skewness models with interval numbers to extend the classical mean-variance portfolio selection model by defining the future financial market optimistically, pessimistically, and weightedly combined ways [27].Inspired and motivated by [28], Wu et al. (2013) proposed an interval portfolio model, where both expected returns and risk can vary in estimated intervals [29].In other words, the solution methods to solve the interval linear programming model for portfolio selection have been widely explored.However, to the best of our knowledge, there are few methods to solve the interval quadratic programming model for portfolio selection with interval coefficients of the objective function and its constraints.
Theoretically, robust optimization is also an effective tool for dealing with parameter uncertainty models, and has received extensive attention in the fields of natural sciences, engineering sciences, and economic management.Compared with interval optimization, the robust optimization theory considers the worst case of all possible values, and its optimization result is more conservative than the interval theory.For investors with high security requirements or conservative investment strategies, portfolio strategy based on robust optimization theory is a good choice.However, when using this theory to analyze the problem, if the number of uncertain parameters increases, the number of elements in the scene will also show an exponential growth trend, which makes the established optimization model difficult to solve [30][31].However, combined with the existing literatures, it is more suitable to use the interval optimization to find the optimal solution of the objective function for the interval quadratic programming portfolio model proposed in this paper [32].
To solve the interval quadratic programming problem, Liu and Wang (2007) developed an algorithm for the interval quadratic programming with constraints, which contained interval numbers [33].Later, Li and Tian (2008) extended Liu and Wang's method and developed a new algorithm to optimize the upper bounds of the coefficients in the general interval quadratic programming problem with all coefficients in the objective function, and its constraints are interval numbers [34].Jiang et al. (2008) conducted a non-linear interval programming method that transformed the uncertain optimization problem into a deterministic two-objective optimization problem to seek the algorithmic solutions [35].Li et al. (2016) developed a simple and effective method to check the zero dual gaps and discussed some relations between the upper and optimal values of the two modes to estimate the optimal value of the fundamental problem of interval quadratic programming [36].However, there is little research on the portfolio selection problem using interval quadratic programming.Xu et al. proposed an interval quadratic programming model that assumed that there are no short sales and introduced the acceptability and possibility degree of interval number to transform the uncertainty model into a deterministic model [32,[37][38].Based on a partial-order relation in the set of intervals, Kuamr et al. (2013) developed a method to determine an acceptable optimal feasible solution to solve the generalized interval quadratic programming model, and applied to the securities portfolio selection [39].
Considering transaction cost, borrowing constraint and threshold constraint, Zhang et al. (2016) proposed a multi-stage mean-semi-variance portfolio model with minimum transaction volume constraint [40].Compared with the existing multi-stage portfolio, the decision variable of the multi-stage portfolio is an integer, which is consistent with the real portfolio.Zhou et al.
(2015) constructed a multi-stage portfolio optimization model considering transaction costs.Based on the real frontier, the efficiency of portfolio was defined and the corresponding nonlinear model was proposed to solve the problem [41].Although Zhang and Zhou considered the transaction volume and transaction cost, it studied the portfolio model of securities under deterministic conditions.However, the various uncertainties in the securities market made it difficult for investors to give accurate values for the yield and risk of securities.Instead, investors were more likely to obtain the range of variation of these uncertain parameters, that is, the number of intervals, so research Investment portfolios and risks were more meaningful for portfolio models with interval numbers.Although Xu et al. (2015) and Kuamr et al. (2013) studied the interval quadratic programming model of securities investment, they did not consider the effects of transaction costs and market liquidity, the results of their proposed models were not sufficiently optimized [32,39].To construct a more optimized model, a general interval quadratic programming model for portfolio selection based on Xu et al. (2012Xu et al. ( , 2013) ) and Kuamr et al. (2013) must be investigated.In this paper, we develop a new general interval quadratic programming model for portfolio selection by introducing the linear transaction costs and liquidity of the securities market, which makes the model more optimized and closer to the actual situation.To solve the general interval quadratic programming, a new solution approach to the problem is proposed based on the Lagrange dual algorithm.Based on the duality method, a more accurate value can be obtained when solving the upper bound of the general interval quadratic programming.
This paper is organized as follows.First, Section 2 reviews some preliminary knowledge about interval numbers.In Section 3, a new general interval quadratic programming model for portfolio selection and a numerical method based on the Lagrange theorem and duality theory are proposed.Then, we present two numerical examples to illustrate the potential applications of the new models and compare two methods of the model in Section 4. Finally, the concluding remarks and future research directions are provided in Section 5.
Theory of interval numbers
(1) Definition of the interval number and interval matrix. Definition 2.1 Let $\tilde{a} = [\underline{a}, \overline{a}]$ be a bounded closed interval with $\underline{a} \le \overline{a}$ and $\underline{a}, \overline{a} \in \mathbb{R}$. We regard the interval as a number represented by its endpoints $\underline{a}$ and $\overline{a}$ and call $\tilde{a} = [\underline{a}, \overline{a}]$ an interval number. If $\underline{a} = \overline{a}$, then $\tilde{a}$ reduces to a real number.
If every real matrix contained in the interval matrix $\tilde{A} = (\tilde{a}_{ij})_{n \times n}$ is symmetric positive semidefinite, then we call the interval matrix $\tilde{A} = (\tilde{a}_{ij})_{n \times n}$ symmetric positive semidefinite.
(2) Operations on interval numbers. Let $\tilde{a} = [\underline{a}, \overline{a}]$ and $\tilde{b} = [\underline{b}, \overline{b}]$ be two interval numbers and let $k \in \mathbb{R}$ be a real number. Then $\tilde{a} + \tilde{b} = [\underline{a} + \underline{b}, \overline{a} + \overline{b}]$, and $k\tilde{a} = [k\underline{a}, k\overline{a}]$ if $k \ge 0$, while $k\tilde{a} = [k\overline{a}, k\underline{a}]$ if $k < 0$. For more details on the theory of interval numbers, see [42].
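The interval operations above translate directly into code. The sketch below is a minimal illustration of the addition and scalar-multiplication rules for interval numbers; it is not part of the solution algorithm developed later in the paper.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float   # lower endpoint (underline a)
    hi: float   # upper endpoint (overline a)

    def __add__(self, other):
        # [a, A] + [b, B] = [a + b, A + B]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, k):
        # k * [a, A] = [k*a, k*A] if k >= 0, otherwise [k*A, k*a]
        return Interval(k * self.lo, k * self.hi) if k >= 0 else Interval(k * self.hi, k * self.lo)

r1 = Interval(0.02, 0.05)     # e.g., an interval-valued expected return
r2 = Interval(0.01, 0.04)
print(r1 + r2)                # Interval(lo=0.03, hi=0.09)
print(r1.scale(-2.0))         # Interval(lo=-0.1, hi=-0.04)
```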
Model and solution
Liu et al. (2015) showed that ignoring transaction costs often leads to inefficient portfolios, so this article introduces transaction costs into the model [43]. Suppose the investor pays a transaction fee at rate $c_i$ when purchasing risky security $i$ ($i = 1, 2, \ldots, n$) in proportion $x_i$; if the purchase amount does not exceed a given value $u_i$, the fee is calculated according to $u_i$, which defines the transaction cost function. When considering the transaction cost, we may take the transaction cost function $C_i(x_i)$ to be a linear function. This paper introduces linear transaction costs and liquidity (following the idea of [9] and [44], the turnover rate is used to measure market liquidity) as constraint conditions in the model, and uses interval numbers to describe the rate of return, the risk loss rate and the liquidity of the securities. Suppose there are n types of securities for investors to select. Based on the mean-variance model, the investors intend to minimize the risk of the portfolio. We establish a new general interval quadratic programming model for portfolio selection, denoted model (1), where $c_i$ is the transaction cost rate of security $i$, $x_i$ is the investment proportion in security $i$, and $\tilde{r}_i$ and $\tilde{l}_i$ denote the expected return and the turnover rate of security $i$, respectively. $\tilde{Q} = (\tilde{q}_{ij})_{n \times n}$, $i, j = 1, 2, \ldots, n$, is the covariance matrix of the return vector, which is assumed to be positive semi-definite. Because $\tilde{r}_i$, $\tilde{q}_{ij}$, and $\tilde{l}_i$ are uncertain, we treat them as interval numbers, i.e., $\tilde{r}_i = [\underline{r}_i, \overline{r}_i]$, $\tilde{q}_{ij} = [\underline{q}_{ij}, \overline{q}_{ij}]$, and $\tilde{l}_i = [\underline{l}_i, \overline{l}_i]$. By solving for $x = (x_1, x_2, \ldots, x_n)^T$ in model (1), we obtain a portfolio of securities.
To solve the interval quadratic programming, most studies first consider how to convert it into a deterministic model and design an algorithm [32,45].Yao et al. ( 2016) conducted a multi-period mean-variance portfolio selection problem with a stochastic interest rate using the dynamic programming approach and Lagrange duality theory [46].However, they only considered the expected return and risk in their multi-period mean-variance portfolio selection and did not account for the effects of transaction costs and market liquidity, which makes the result not optimal.This paper focuses on the Lagrange dual algorithm to solve the general interval quadratic programming model for portfolio selection.Based on the duality method, a more accurate value can be obtained when solving for the upper bound of the general interval quadratic programming.Thus, based on the risk range of the portfolio, the investors can select a more reasonable investment plan in an uncertain market environment.
To validate the Lagrange dual method, this paper also uses the common portfolio selection method to solve the general interval quadratic programming model [47].First, in sections 3.1 and 3.2, this paper proposes a new method based on the Lagrange dual algorithm.Second, conventional method is shown in Section 3.3.Finally, the two methods are compared by experiments.
Decomposition of the model
The objective function and constraint coefficients of model (1) are interval numbers. Decomposing each interval coefficient into its lower and upper endpoints splits model (1) into a pair of nested programs, denoted (2), whose optimal values give the lower bound $\underline{f}(x)$ and the upper bound $\overline{f}(x)$ of the objective function.
Lagrange dual method to solve the upper and lower bounds
The interval of the objective values of model (1) is obtained by giving its lower bound and its upper bound. First, the simpler case of the lower bound is discussed. Since the inner and outer programs of (2) have identical minimization operations, they can be combined into a conventional one-level program in which the constraints of the two programs are considered simultaneously. For $x_i, x_j \ge 0$ ($i, j = 1, 2, \ldots, n$), we have
$$\underline{q}_{ij} x_i x_j \le \tilde{q}_{ij} x_i x_j \le \overline{q}_{ij} x_i x_j .$$
In searching for the minimal value of the objective function, each parameter $\tilde{q}_{ij}$ ($1 \le i, j \le n$) must therefore reach its lower bound $\underline{q}_{ij}$. According to the largest feasible region defined by the inequality constraints in [47] and [48], the interval constraints can be transformed into deterministic inequalities, and the lower-bound problem can be written as the equivalent model (4):
$$\underline{f}(x) = \min \sum_{i=1}^{n} \sum_{j=1}^{n} \underline{q}_{ij} x_i x_j \quad \text{s.t. the relaxed constraints of model (1)},$$
which is a conventional quadratic programming model of portfolio selection. Now consider the upper bound $\overline{f}(x)$. For $x_j \ge 0$ ($j = 1, 2, \ldots, n$), the objective is largest over the interval coefficients when each $\tilde{q}_{ij}$ reaches its upper bound $\overline{q}_{ij}$, so the upper bound is formulated as the max-min problem (10). Solving model (10) directly is difficult because the outer and inner programs have opposite directions of optimization (one maximization, the other minimization). We therefore consider the Lagrange dual of the inner problem in (10), with dual function $\theta(\lambda, \delta)$; since $\tilde{Q} = (\tilde{q}_{ij})_{n \times n}$ is symmetric positive semi-definite in model (1), $\theta(\lambda, \delta)$ is a convex function for any $\lambda, \delta$. The Lagrange dual method for calculating the upper and lower bounds used in this section was first proposed by [33] for a special type of interval quadratic programming and extended to general interval quadratic programming by [34]; algorithms for interval quadratic programming with both equality and inequality constraints were established by [36,49]. Hence, applying the variable substitutions of [36,49] (i.e., $r_{1i} = \lambda_1 t_i$ and $r_{2i} = \lambda_2 l_i$), the dual problem (11) can be transformed directly into the single-level model (12).
Therefore, the lower bound and the upper bound of the objective values, $\underline{f}(x)$ and $\overline{f}(x)$, are obtained by solving (4) and (12), respectively. Hence, we obtain the interval of objective values of the portfolio selection model.
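For example, the lower-bound problem (4) is an ordinary quadratic program and can be handled by any QP solver (the paper's examples use MATLAB's quadprog); the sketch below shows an analogous formulation in Python with cvxpy. The covariance matrix, return/liquidity bounds and required levels are hypothetical placeholders, and the constraints are written in a generic form (net expected return and liquidity at least at required levels, weights summing to one, no short sales), not the paper's exact model (4).

```python
import numpy as np
import cvxpy as cp

# Hypothetical inputs: lower-bound covariance matrix, upper-bound net returns,
# upper-bound turnover rates, and the investor's required return/liquidity levels.
Q_low = np.array([[0.010, 0.002, 0.001],
                  [0.002, 0.015, 0.003],
                  [0.001, 0.003, 0.020]])
net_return_hi = np.array([0.030, 0.045, 0.060])   # upper bounds of (r_i - c_i)
turnover_hi   = np.array([0.20, 0.35, 0.50])      # upper bounds of l_i
R_req, L_req = 0.04, 0.30                         # required return and liquidity

x = cp.Variable(3)
objective = cp.Minimize(cp.quad_form(x, Q_low))   # minimize x' Q_low x
constraints = [net_return_hi @ x >= R_req,        # return constraint (largest feasible region)
               turnover_hi @ x >= L_req,          # liquidity constraint
               cp.sum(x) == 1,                    # fully invested
               x >= 0]                            # no short sales
prob = cp.Problem(objective, constraints)
prob.solve()
print("weights:", np.round(x.value, 4), "lower-bound risk:", round(prob.value, 6))
```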
Conventional method to solve the model
To solve an interval programming model, most studies first consider how to convert it into a deterministic model. Over the last decade, many studies have converted interval linear programming into deterministic programming [50,51]. References [47] and [48] introduced definitions such as the best optimal value, worst optimal value, maximum range inequality and minimum range inequality, and solved the interval linear programming problem by transforming it into deterministic programming. However, these methods apply to interval linear programming only, while the portfolio selection model discussed in this paper is quadratic. It was proved in [36,49] that these methods can be extended to general interval quadratic programming. Therefore, for the general interval quadratic programming model (1) of portfolio selection, we transform model (1) into two deterministic programming models (13) and (14) directly by using the results in [36,49]. By solving the quadratic programming models (13) and (14), we obtain the upper and lower bounds of the objective function of the general interval quadratic programming model (1) and compare them with the results of the Lagrange dual method proposed in this paper. According to the upper and lower bounds of the two methods, we can determine the minimum-risk portfolio interval.
Numerical examples
This section uses two numerical examples to illustrate how the proposed method solves the general interval quadratic programming model for portfolio selection. We solve the proposed model using the Lagrange dual method of this paper (method 1) and the conventional method of Section 3.3 (method 2). Two examples are used so that the conclusions do not rest on the incidental results of a single experiment.
4.1.1 Solution of method 1. The general interval quadratic programming models (4) and (12) were used to solve the portfolio selection based on the Lagrange dual method. Substituting the data of Section 4.1 into models (4) and (12) and using the function quadprog in MATLAB, we derived the optimal values f̲(x) and f̄(x). The investment proportions are as follows. For the lower bound of the objective function: x = (0.0352, 0.8197, 0.1451), f̲(x) = 0.0181. For the upper bound of the objective function: x = (0.0188, 0.0365, 0.9447), f̄(x) = 0.0537. Combining these results, we conclude that the objective value of this general interval quadratic program lies in the range f(x) = [0.0181, 0.0537].
4.1.2 Solution of method 2.
According to the data in Section 4.1, we obtain the optimal solutions that represent the lower and upper bounds of the objective function of model (1) by solving models (13) and (14) in Section 3.3, respectively. The results are as follows. The lower bound of the objective function is x = (0.0352, 0.8197, 0.1451), f_L(x) = 0.0181. The upper bound of the objective function is x = (0, 0.0047, 0.9953), f_U(x) = 0.0587. Then the solution interval of the portfolio quadratic programming model with transaction costs is f(x) = [0.0181, 0.0587].

4.1.3 Comparison of the two methods. The solution intervals for the objective function of the portfolio model obtained using the two methods are f_1 = [0.0181, 0.0537] and f_2 = [0.0181, 0.0587]. The relationship between the two intervals is shown in S1 Fig. From the relationship in S1 Fig and the interval order relation in [53], we see that f_1 ≤ f_2. We compare f_1 and f_2 according to the deterministic interval order relation (3) in [53]. Since m(f_1) = 0.0359 < m(f_2) = 0.0384, f_1 is better than f_2. Furthermore, f_1 is clearly better than f_2 because P(f_1 < f_2) = 0.5328, which can be obtained from the interval possibility degree in [51].
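These interval comparisons are straightforward to reproduce. The short sketch below assumes the usual midpoint operator m([a, b]) = (a + b)/2 and one commonly used form of the interval possibility degree, P(f_1 < f_2) = max{1 − max((f_1⁺ − f_2⁻)/(w(f_1) + w(f_2)), 0), 0}, where w(·) is the interval width; with these assumed definitions it reproduces the values 0.0359, 0.0384 and 0.5328 quoted above, but the authoritative definitions are those given in [51] and [53].

```python
def midpoint(interval):
    a, b = interval
    return (a + b) / 2

def width(interval):
    a, b = interval
    return b - a

def possibility_degree(f1, f2):
    """P(f1 < f2) under the assumed possibility-degree formula."""
    ratio = (f1[1] - f2[0]) / (width(f1) + width(f2))
    return max(1 - max(ratio, 0), 0)

f1 = (0.0181, 0.0537)   # Lagrange dual method (method 1)
f2 = (0.0181, 0.0587)   # conventional method (method 2)

print(round(midpoint(f1), 4), round(midpoint(f2), 4))   # 0.0359 0.0384
print(round(possibility_degree(f1, f2), 4))             # 0.5328
```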
In summary, based on the deterministic interval order relation and the interval possibility degree, the above results show that the Lagrange dual method for the proposed model is better than the other method. Moreover, in the actual investment process, investors can use the method of this paper to select a specific portfolio plan in line with their preferences.
Example 2
We selected fifteen types of securities of Shanghai Stock Exchange from September 2006 to September 2018: Pudong Development Bank, Baiyun Airport, Dongfeng Motor, China International Trade, Initial Share, Shanghai Airport, Baogang Stock, Huaneng International, Wantong Expressway, Huaxia Bank, Minsheng Bank, Minmetals Development, Eastern Airlines, SAIC Group, Guangzhou Development.The monthly opening price, closing price and turnover rate of each stock can be obtained from the Wind database, so we can calculate the intervals of expected rate of return, intervals of variance and covariance risk and turnover rate intervals of the fifteen securities as shown in Tables 1-3.
The minimum expected turnover rate interval of the three securities was set as l_0 = [0.05, 0.35].
From the relationship shown in S2 Fig and the interval order relation given in [53], we can see that f_1 ≤ f_2. We compare f_1 and f_2 according to the deterministic interval order relation (3) in [53]. Since m(f_1) = 0.0243 < m(f_2) = 0.0382, it can be concluded that f_1 is better than f_2. On the other hand, since P(f_1 < f_2) = 0.7097, obtained from the interval possibility degree in [16], it is again clear that f_1 is better than f_2.
Therefore, based on the deterministic interval order relation and the interval possibility degree, the above results again show that the Lagrange dual method for the proposed model is better than the other method. The results also show that a smaller interval of objective values corresponds to a smaller portfolio risk. In the actual investment process, investors can use the method of this paper to select a specific portfolio plan in line with their preferences.
Conclusions
Considering the strong uncertainty of the securities market in the actual investment environment, this paper describes the uncertainties of security risk, return and the corresponding liquidity with interval numbers and establishes a new general interval quadratic programming model for portfolio selection. Next, we propose a new and efficient numerical method for solving the proposed model based on the Lagrange theorem and duality theory. To show the efficiency of the proposed Lagrange dual method, two numerical examples were presented. The numerical results show that the proposed portfolio selection model is feasible and that the Lagrange dual method is better than the traditional method at finding smaller solution intervals, which implies a smaller risk range for the portfolio. In addition, this provides a new investment idea for securities investors. In the actual securities market, various forms of transaction costs are likely to affect portfolio selection; however, this paper considers only a linear transaction cost function. There remains considerable room for research on the quadratic programming model of portfolio selection under different forms of transaction costs.
First clinical evaluation of the safety and efficacy of tarumase for the debridement of venous leg ulcers
Abstract We report the first clinical evaluation of a new enzymatic wound debridement product containing tarumase in venous leg ulcer patients. As a first‐in‐human study, this was a prospective, open‐label, multi‐centre, dose escalation study across five dose cohorts and involving a total of 43 patients treated three times weekly for up to 4 weeks (12 applications). The primary and secondary endpoints of the study were to assess the systemic safety, local tolerability, and early proof of concept both for wound debridement and healing. Results indicated that the tarumase enzyme was well tolerated when applied topically to wounds, with no indications of systemic absorption, no evidence of antibody generation, and no systemic effects on coagulation pathways. Locally, there was no evidence of pain on application, no local itching, no increases in erythema, oedema, exudate or bleeding and only a few treatment emergent adverse events were reported. As the concentration of tarumase was escalated, trends towards faster and improved effectiveness of wound debridement were observed, especially in patients with significant slough at baseline. Trends towards faster rates of healing were also noted based on observations of increased granulation tissue, increased linear healing and reduction in surface area over the 4‐week treatment period.
Key Messages
• The study provides an overview of the safety and efficacy results achieved from a first-in-human (Phase IIa) clinical trial of tarumase used for the enzymatic debridement of a Venous Leg Ulcer (VLU) patient population.
• The enzyme had an excellent clinical safety profile, both with respect to systemic and local safety, consistent with findings previously reported from non-clinical studies.1 The safety profile was such that higher concentrations of tarumase can be taken into further Phase II clinical trials to optimise clinical effects.
• Proof of concept was established that tarumase had the capacity to debride sloughy VLU wounds; trends towards faster and more complete debridement and trends towards improved healing were observed with increased concentrations of tarumase applied.
• Further clinical studies that utilise randomised controlled groups (with stratification for wound size and percentage of slough) are being planned to further explore the efficacy of tarumase at higher concentrations.
| INTRODUCTION
Unlike acute wounds, chronic or "hard-to-heal" wounds often contain a number of microbial, biochemical and/or cellular abnormalities that prevent or slow progression through to healing.Indeed, the most salient feature of chronic wounds is that they are stubbornly difficult to heal, stalling for months, or even years, enhancing disability and underpinning chronic healthcare challenges.
Recent estimates suggest that, after 20 weeks of treatment, complete wound closure is achieved in as few as 25%-50% of chronic or hard-to-heal wounds, especially venous and diabetic ulcers.2 To address the burgeoning challenges of chronic wounds, much attention has been given to understanding and improving their clinical management. In this context, the conceptual introduction of the TIME paradigm (Tissue viability, Infection & Inflammation management, Moisture management, Edge of Wound) more than 10 years ago provided a fundamental framework for optimising the healing of chronic wounds. Notably, good wound bed preparation and the consistent application of appropriate and effective debridement have been recommended.3 Whilst all elements of the TIME paradigm are critical, they do not necessarily need to occur sequentially, and a single intervention that impacts more than one element of the framework may be beneficial. For example, an effective debridement gel may not only remove non-viable tissue but could potentially reduce bacterial load and/or biofilm, and may additionally optimise the wound environment for healing by management of moisture and/or pH.4 Proactive, continuous debridement is often thought to be necessary to effect sustained improvement over many weeks. There have been many different approaches to debridement, but enzymatic debridement, once a reliable tool in chronic wound management, has, with the exception of collagenase, largely fallen away in favour of more physical/abrasive or sharp scalpel/surgical methods, potentially resulting in pain and bleeding complications for the patient.5 Moreover, these techniques often require expensive equipment or specialised training, and/or carry limitations on where such procedures can take place.4 There is, therefore, an unmet need to develop interventions which improve wound bed preparation but at the same time are easy to use, affordable, and do not increase pre-existing wound pain and bleeding.
Tarumase is a naturally-occurring trypsin serine protease enzyme with selective activity against fibrin, collagen, and elastin; it is derived recombinantly from medical maggots, and can be administered topically, for example, as a gel.This approach can, therefore, leverage the known attractive properties of maggot debridement therapy (MDT) but without the accompanying inconvenience, logistical challenges and issues relating to patient acceptability.Non-clinical studies of tarumase have shown multiple positive findings, including selective targeting of fibrin, reduction in wound bed bacterial load, and promising healing outcomes. 1However, there has not been a prior report of the use of tarumase in human subjects, and thus its properties and safety profile in clinical settings such as chronic wounds are not known.
In this paper, and in light of previously reported non-clinical experience,1 we present clinical data using tarumase combined with a proprietary hydrogel formulation for the management of Venous Leg Ulcer (VLU) wounds. We report the results from a first-in-human Phase IIa study of the application of tarumase in humans for the treatment of chronic wounds. We hypothesised that tarumase would show time- and concentration-dependent solubilisation of devitalised tissue in the chronic wound bed, with no appreciable safety signals and, in particular, no additional pain.
| MATERIALS AND METHODS
The clinical study (Clinicaltrials.govregistration NCT04956900; EudraCT 2020-001392-32) was a prospective, open-label, multi-centre, dose escalation study that was conducted in accordance with the requirements of the International Council on Harmonisation Good Clinical Practice (ICH GCP) and in accordance with national regulations and guidance of the United Kingdom, Hungary and United States.It was conducted across a total of 8 clinical centres, with the primary objective of establishing systemic safety (adverse events, pharmacokinetics, anti-drug antibodies) and local tolerability of a topically applied (tarumase containing) hydrogel (Aurase Wound Gel, SolasCure Ltd), dosed 2-3 times weekly over a 4-week period (12 doses) to a clinically relevant patient population of sloughy VLUs.Secondary objectives included the exploration of effects on rate and/or extent of wound debridement and wound healing trajectory.
Patients enrolled to the study were required to be ≥18 years of age, to have provided written informed consent, to be in a good general state of physical and mental health, as assessed by the investigator and to have a confirmed VLU (2-50 cm 2 in size) present for more than 30 days but less than 2 years, and which contained sufficient non-viable tissue requiring clinical debridement.Exclusions included abnormal blood laboratory and/or vital signs; clinical signs of infection during screening (including use of oral or IV antibiotics); bleeding disorders and/or use of anti-thrombotic therapy in the screening period; deep ulcers (i.e., exposed tendons, ligaments, muscle or bone); wounds with high levels of exudate; prior skin graft; use of Negative Pressure Wound Therapy (NPWT); systemic or cutaneously applied growth factors; use of other enzymatic debriding agents or live maggot therapy within 2 weeks before screening; and pregnant or breastfeeding women.There was no stratification of patients in the study; patients were simply enrolled sequentially, provided they met the inclusion/exclusion criteria.
Five cohorts of sequential patients were planned to be enrolled.Cohort 1 (five patients) used the new proprietary gel vehicle only (placebo).The vehicle was designed for modifying the pH of the enzyme to its nominal optimal (pH 7.0-8.0)and optimising moisture to the wound.Cohorts 2-5 additionally utilised increasing concentrations of tarumase in the proprietary vehicle at 1, 2, 5 and 9 U/ mL and aimed to recruit 10 patients per cohort.A safety review panel was employed throughout the study to review the systemic and local safety of the sentinel patient in each cohort (allowing recruitment to the rest of that cohort) and then again at the end of each dosing cohort (to allow the commencement of the next sequential concentration).
Throughout the study, the investigational products were applied topically to the wound bed at a consistent dose (0.4 mL per cm 2 ) and were covered by a moist secondary dressing (foam and film) and compression bandaging as standard of care.At each of the 12 scheduled dressing changes (i.e., every 2-3 days), the wound was cleansed using sterile water or saline, before clinical assessments were performed.No rubbing or drying of the wound (other than air drying) was permitted, to avoid any potential for physical contact with the wound that could constitute mechanical debridement; to this extent changes in wounds observed were due solely to the allocated treatment regime in combination with standard of care.Assessments of wound size and tissue typing in the wound (granulation tissue / non-viable tissue) and extent of granulation tissue / non-viable tissue (as a proportion of wound size) were based on digital images captured using a wound imaging device (Silhouette ® -Aranz Medical) and assessed by the investigator in real time or by an independent reader (offline).Where 100% debridement (i.e.zero non-viable tissue in the wound) or 100% wound healing was achieved during the 4-week period, based on the investigator's assessment, this was considered as the last visit and Last Observation Carried Forward (LOCF) was utilised.
Safety endpoints in the study included assessment of adverse events (all visits), pharmacokinetics immediately before (baseline) and at 5, 10, 15 and 60 min, 2, 4 and 8 h after first and last doses, anti-drug antibodies before (baseline) and at end of 4 weeks, as well as changes in systemic blood clotting factors (Prothrombin Time (PT), Activated Partial Thromboplastin Time (APTT) and Fibrinogen (Factor I)), haematology factors (haematocrit, RBC count, MCV, differential WBC and platelet counts), clinical chemistry factors (total bilirubin, total protein, albumin, AST, ALT, GGT, ALP, Urea, Creatine, Sodium, Potassium and Calcium), urinalysis (pH, protein, glucose, ketones and presence of blood) and changes in ECG (baseline and week 4).
Comprehensive assessments of local tolerability were performed at baseline and at every subsequent dressing change for pain and itch at the wound site (including pain on application of the gel) using an 11-point Numerical Rating Scale (NRS) where 0 was no itch or pain and 10 equated to the worst itch or pain experienced.In addition erythema, oedema, exudate, induration, were assessed on a five-point NRS (none, mild, moderate, marked or severe) and bleeding and infection were recorded as either absent or present.
Efficacy endpoints assessed included the extent and rate of debridement (percentage change in non-viable tissue; total non-viable tissue removal) as well as change in granulation tissue, change in wound surface area and rate of linear wound healing, using Gilman's equation6 over the 4-week treatment period.
| Patient details, and study course
A total of 43 patients were enrolled in the study, and 39/43 (91%) completed the study through to week 4. The median age of the wounds included in the study fell in the range of 3-6 months, with only 12% of participants having a wound older than 1 year. The background demographics and disposition of the patients for each of the dosing cohorts in the clinical trial are provided in Table 1 and Figure 1, respectively.
Patients were exposed to gel on a maximum of 12 occasions (every 2-3 days over a 4-week period).Although the biological activity of the tarumase containing gels only extended over a 9-fold concentration range (i.e., 1-9 U/mL), and as dosing was fixed per unit of wound size (0.4 mL/cm 2 ), the extent of total exposure to tarumase actually varied by a factor of 244-fold between the lowest concentration applied to the smallest wound and highest concentration applied to the largest wound during the clinical trial.
| Adverse events (AEs)
Thirty-eight AEs were reported in the clinical study, 30 of which were considered to be wholly unrelated to the application of investigational product.All 38 AEs were graded in respect of severity to be "mild" or "moderate" and resolved during the ongoing course of treatment.No "severe" AEs were noted in any patient.Two patients had a total of four serious AEs (SAEs).These included erysipelas, DVT and septicaemia in one case, and asthenia in the second.In all four SAE reports the investigators considered there to be no suspected relationship to the investigational product, based on patient's prior medical history and onset between last dose of Aurase and onset of symptoms.
TABLE 1 Overview of demographic and key baseline wound measures.

The 8 AEs that were considered to be "possibly", "probably" or "almost definitely" related to investigational product were local to the treated wound site and included maceration of the wound (n = 1), wound/tissue irritation (n = 2), wound infection including cellulitis (n = 3), and other wound complications (n = 2). In this regard, the AEs considered related are entirely consistent with the type and profile of AEs occurring in VLU patients and with the type of AEs often reported in the context of autolytic hydrogels.7 There were four withdrawals from the study, one due to an unrelated AE and one considered related to investigational product (irritation at the wound site). The remaining two withdrawals were unrelated to AEs (one by patient volition and one due to inadvertent recruitment).
| Systemic safety profile
Blood sampling for tarumase (using a validated LC/MS-MS method; Limit of Quantitation 30 ng/mL) showed no evidence of absorption of tarumase into the systemic circulation at time points post application to the wound (5 min through to 8 h) either following 1st application or after the final (12th application), indicating little or no absorption and no indication of accumulation.Similarly, a validated ELISA assay (Cut off limit: 50 ng/mL) showed no evidence for the emergence of Anti-Drug Antibodies (ADA) against tarumase in any of the five cohorts.
As a fibrinolytic enzyme, special attention was paid to potential changes in systemic coagulation parameters, through analysis of PT, APTT and Factor I during the course of the study (representing a total of 506 individual data points across 42 patients).In all cases, PT remained within normal expected limits.All Factor I values fell within the range of 0.8-5.4g/L and all APTT values fell within the range of 24-43.6 s.All but 2.8% of the values were, therefore, within normal population ranges and there were no clinically relevant or meaningful blood coagulation abnormalities reported in the study.
Additionally, all haematology, clinical chemistry, urinalysis, and physical examinations, including ECG measurements, were noted to have no clinically significant changes in response to treatment with tarumase at any concentration tested.
| Local tolerability and pain assessments
Chronic pain, oedema, malodour and erythema are common problems in chronic VLU wounds.8 This study addressed whether the use of tarumase accentuated these baseline clinical problems. The design of the trial was such that no additional analgesia was permitted, no local anaesthesia was applied to the wounds, and wound-edge protection was not mandated.
Participants provided an assessment of pain and itch immediately prior to each wound dressing change, once more within 5 min of the wound dressing change, and again within 15-30 min of application of the investigational product. Figure 2A illustrates the pain outcomes and Figure 2B the itch outcomes across the various enzyme dosing cohorts; they indicate no additional pain or itch burden associated with the use of the tarumase gel products over the course of 12 applications. Similarly, an assessment of local wound reactions (notably erythema, oedema, exudate, induration, bleeding and infection) performed at each dressing change showed no evidence that dosing tarumase gels 2-3 times weekly resulted in adverse wound-edge or wound-bed effects, indicating that tarumase was well tolerated in the wound bed at all concentrations tested (see supplementary Table S1).
| Wound debridement
The data shown in Figure 3A indicate encouraging trends, and human proof of concept, that increasing the concentration of tarumase-containing gels results in the removal of more non-viable tissue from the VLU wounds over 12 applications. At the highest concentration tested (9 U/mL), removal of non-viable tissue was also more rapid, with a mean reduction in non-viable tissue of more than 50% within 3 applications and of 75% over 12 applications.
| Granulation tissue
Similarly, the data, represented in Figure 3B, shows that across all tarumase gel groups, there was an encouraging increase in the percentage surface area of the wound covered by granulation tissue over the 4 weeks of the study.Maximal increases in granulation tissue coverage and rate of granulation tissue increase (from a mean 16.7 ± 15.3% at baseline) was observed at the 9 U/mL concentration of tarumase gel (mean 54.6 ± 33.7% after 3 applications; mean 76.4 ± 26.3% at end of treatment).
| Wound healing
Wound healing during the study was measured both as percentage partial surface area reduction (PAR) of the wound (normalised for wound size) and as a rate of linear wound healing (Gilman's equation) to account for the perimeter of the wounds.PAR indicated reductions in wound size across all treatment groups over the course of the 28-day study (Figure 4) but at varying rates, with the earliest and highest healing response being observed at the highest concentration of tarumase gel (9 U/mL).Mean daily percentage PAR rates over the 28-day study were, however, 1.8% per day for the 1 and 9 U/mL concentrations, with smaller healing rates 0.43%/day and 0.75%/day being observed at 2 and 5 U/mL respectively.When wound perimeter was accounted for using the Gilman equation, the fastest healing rate, determined for linear rate of healing was also observed with 9 U/mL tarumase gel and this was double the healing rate observed for the 1 U/mL dose group (1 U/mL 0.127 mm/day; 2 U/mL 0.192 mm/day; 5 U/mL 0.063 mm/day; 9 U/mL 0.288 mm/day).
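For readers unfamiliar with the linear healing rate, the sketch below shows one common reading of Gilman's method, in which the mean linear advance of the wound margin is estimated as the change in area divided by the average of the initial and final perimeters, expressed per day. The wound measurements used are invented for illustration and are not data from this trial, and the exact formulation applied in the study should be taken from reference 6.

```python
def gilman_linear_healing_rate(area_0, perimeter_0, area_t, perimeter_t, days):
    """Approximate linear healing rate (wound-margin advance) in mm/day.

    Assumed form of Gilman's equation: d = (A0 - At) / mean(P0, Pt), per day.
    Areas are in mm^2, perimeters in mm.
    """
    mean_perimeter = (perimeter_0 + perimeter_t) / 2
    return (area_0 - area_t) / mean_perimeter / days

# Hypothetical wound shrinking from 500 mm^2 to 380 mm^2 over 28 days,
# with the perimeter shrinking from 85 mm to 75 mm.
rate = gilman_linear_healing_rate(500, 85, 380, 75, 28)
print(f"{rate:.3f} mm/day")   # about 0.054 mm/day for this invented example
```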
| DISCUSSION
We report the first-in-human wound debridement experience with increasing concentrations of a recombinant fibrinolytic (tarumase) enzyme administered from a proprietary hydrogel three times weekly during routine dressing changes, within a population of patients with VLUs.
Rightly, the study had a significant focus on systemic safety and local tolerability of the enzyme in this clinical population, and in this regard, fully met the primary aims of the clinical trial.
Overall, there was an excellent safety profile in all tarumase groups, with no evidence of systemic absorption, no evidence for generation of antibodies to the enzyme, no systemic adverse effects (including on blood coagulation factors) and no local tolerability concerns relating to erythema, oedema, induration, exudate, or bleeding leading to any attributable serious adverse events.Strikingly, there was no evidence of any pain at the wound sites, either immediately after each application of the tarumase gels or after prolonged contact with the wound bed from 12 applications over 4 weeks.We hypothesise that this excellent safety profile is in large part, due to the parasitic maggot from which the enzyme was originally identified and cloned, and which has sought through evolution, to minimise its impact on the host.
As a first-in-human safety study, we used small numbers of patients in the trial, who were neither stratified nor randomised to treatment, but allocated in escalating concentrations based on a successful safety outcome in the prior concentration.In this context there are recognised limitations to the design of the trial to provide for robust efficacy data; notably, the non-uniform distribution of ages of participants and sizes of the wounds enrolled into each cohort may have introduced bias in respect of wound healing (PAR) and/or granulation tissue interpretations.
Notwithstanding the limitations of the trial design, exploratory analyses were performed to investigate proof of concept in respect of debridement and wound healing potential in the patient population.In one such analysis (Figure 3A) we assessed the mean change in non-viable tissue from baseline after 3 applications (1 week), 6 applications (2 weeks), 9 applications (3 weeks) and 12 applications (4 weeks) in patients, who at baseline, had a large proportion of non-viable tissue (≥50%) covering the wound surface area.As a corollary to the data on debridement, additional exploratory analysis was also made in the change in granulation tissue, in patients that started the clinical study with ≤50% of the surface area of the wound covered with granulation tissue (Figure 3B).As no patients in the vehicle group started with ≥50% non-viable tissue (or ≤50% granulation tissue), comparisons focussed on the tarumase gel groups only.
These exploratory analyses provided an early opportunity, within the confines of an otherwise safety study, to examine the potential clinical utility of the enzyme for debridement and wound bed preparation.In this context, we observed trends towards both timeand enzyme concentration dependent reduction in wound bed non-viable tissue, and in parallel increases in wound bed granulation tissue, reductions in wound surface area and increased rates of linear wound healing.
Taking into consideration all of these factors, we believe that these data establish the potential clinical utility of tarumase gels and makes a strong case for further, more robust, clinical studies to better characterise the optimal enzyme concentration needed to achieve timely and complete wound bed debridement, based on a regimen consistent with standard dressing changes.Importantly, from a development perspective the excellent safety profile observed, means that there are few barriers to exploring even higher concentrations of enzyme.
Clearly the work reported here while encouraging, is preliminary, and now must be expanded in the setting of larger clinical trials, with randomisation to vehicle versus enzyme-bearing gel, using stratification to account for factors that may affect debridement and wound healing, and to study clinical effects over a clinically meaningful period of 12 weeks.
FIGURE 1 Consort diagram showing patient disposition.
FIGURE 2 Baseline and end of treatment (A) pain and (B) itch NRS scores over the course of treatment (i.e., 12 applications over 28 days). Error bars show standard deviation of the mean.
FIGURE 3 (A) Baseline and mean percentage reductions in non-viable tissue in patients with ≥50% slough at baseline. (B) Baseline and mean increases in granulation tissue in patients with ≤50% granulation tissue at baseline.
FIGURE 4 Normalised percentage partial area reductions (PAR) in tarumase treatment groups over 4 weeks.
On the existence of three solutions for the Dirichlet problem on the Sierpinski gasket
We apply a recently obtained three critical points theorem of B. Ricceri to prove the existence of at least three solutions of certain two-parameters Dirichlet problems defined on the Sierpinski gasket. We also show the existence of at least three nonzero solutions of certain perturbed two-parameters Dirichlet problems on the Sierpinski gasket, using both the mountain pass theorem of Ambrosetti-Rabinowitz and that of Pucci-Serrin.
Introduction
The celebrated three critical points theorem obtained by Ricceri in [15] turned out to be one of the most often applied abstract multiplicity results for the study of different types of nonlinear problems of variational nature. In this sense we refer to the references listed in [16]. Also, this three critical points theorem has been extended to certain classes of non-smooth functions (see, for example, [2], [3], [12]). Ricceri has published both a revised form of his three critical points theorem ([16]) and a refinement of it ([17]). A corollary of the latter, stated also in [17], is the following result: Theorem 1.1. Let X be a separable and reflexive real Banach space, and let Φ, J : X → R be functionals satisfying the following conditions: (i) Φ is a coercive, sequentially weakly lower semicontinuous C^1-functional, bounded on each bounded subset of X, and whose derivative admits a continuous inverse on X^*. (ii) If (u_n) is a sequence in X converging weakly to u, and if lim inf_{n→∞} Φ(u_n) ≤ Φ(u), then (u_n) has a subsequence converging strongly to u. (iii) J is a C^1-functional with compact derivative. Then, for each compact interval [λ_1, λ_2] ⊂ ]1/ρ_2, 1/ρ_1[ (where, by convention, 1/0 := ∞ and 1/∞ := 0), there exists a positive real number r with the following property: for every λ ∈ [λ_1, λ_2] and for every C^1-functional Ψ : X → R with compact derivative there exists δ > 0 such that, for every η ∈ [0, δ], the equation Φ'(u) = λJ'(u) + ηΨ'(u) has at least three solutions in X whose norms are less than r.
In the present paper we show, with the aid of Theorem 1.1, that under suitable assumptions on the functions f, g : V × R → R the two-parameter Dirichlet problem (DP_{λ,η}) defined on the Sierpinski gasket V in R^{N-1} has at least three solutions. As far as we know, this is the first application of a Ricceri-type three critical points theorem to nonlinear partial differential equations on fractals. (Among the contributions to the theory of nonlinear elliptic equations on fractals we mention [4], [5], [7], [8], [9], [19].)
We also study, in a particular case, a perturbed version of problem (DP_{λ,η}). A similar problem, but involving the p-Laplacian, has recently been investigated in [1].
Notations. We denote by N the set of natural numbers {0, 1, 2, . . .}, by N^* := N \ {0} the set of positive naturals, and by | · | the Euclidean norm on the spaces R^n, n ∈ N^*.
If X is a topological space and M a subset of it, then M and ∂M denote the closure, respectively, the boundary of M .
If X is a normed space and r a positive real, then B r stands for the open ball with radius r centered at the origin.
The Sierpinski gasket
In its initial representation, which goes back to the pioneering papers of the Polish mathematician Waclaw Sierpinski (1882-1969), the Sierpinski gasket is the connected subset of the plane obtained from an equilateral triangle by removing the open middle inscribed equilateral triangle of one quarter of the area, removing the corresponding open triangle from each of the three remaining constituent triangles, and continuing in this way. The gasket can also be obtained as the closure of the set of vertices arising in this construction. Over the years, the Sierpinski gasket has shown itself to be extraordinarily useful in representing roughness both in nature and in man's works. We refer to [18] for an elementary introduction to this subject and to [20] for important applications to differential equations on fractals.
We now rigorously describe the construction of the Sierpinski gasket in a general setting. Let N ≥ 2 be a natural number and let p_1, . . . , p_N ∈ R^{N-1} be such that |p_i − p_j| = 1 for i ≠ j; for i = 1, . . . , N define S_i : R^{N-1} → R^{N-1} by S_i(x) = (x + p_i)/2. Obviously every S_i is a similarity with ratio 1/2. Let S := {S_1, . . . , S_N} and denote by F : P(R^{N-1}) → P(R^{N-1}) the map assigning to a subset A of R^{N-1} the set F(A) := S_1(A) ∪ · · · ∪ S_N(A). It is known (see, for example, Theorem 9.1 in [6]) that there is a unique nonempty compact subset V of R^{N-1}, called the attractor of the family S, such that F(V) = V (that is, V is a fixed point of the map F). The set V is called the Sierpinski gasket (SG for short) in R^{N-1}. It can be constructed inductively as follows: put V_0 := {p_1, . . . , p_N}, V_m := F(V_{m-1}) for m ≥ 1, and let V_* be the closure of ∪_{m∈N} V_m. Taking into account that the maps S_i, i = 1, . . . , N, are homeomorphisms, we conclude that V_* is a fixed point of F. On the other hand, denoting by C the convex hull of the set {p_1, . . . , p_N}, we observe that S_i(C) ⊆ C for i = 1, . . . , N. Thus V_m ⊆ C for every m ∈ N, so V_* ⊆ C. It follows that V_* is nonempty and compact, hence V = V_*. In the sequel V is considered to be endowed with the relative topology induced from the Euclidean topology on R^{N-1}. The set V_0 is called the intrinsic boundary of the SG.
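To make the iterative construction concrete, the following short sketch (a numerical illustration only, not part of the cited references) generates the successive approximations V_m of the classical planar gasket, the case N = 3, by repeatedly applying the maps S_i(x) = (x + p_i)/2 to the vertex set; the particular unit triangle chosen is arbitrary.

```python
import numpy as np

# Vertices of an equilateral triangle with unit side length in R^2 (the case N = 3).
p = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])]

def F(points):
    """One application of F(A) = S_1(A) ∪ ... ∪ S_N(A), with S_i(x) = (x + p_i) / 2."""
    image = {tuple(np.round((x + p_i) / 2, 12)) for x in points for p_i in p}
    return [np.array(point) for point in image]

V_m = [np.array(v) for v in p]      # V_0 = {p_1, ..., p_N}
for m in range(1, 7):               # compute V_1, ..., V_6 in turn
    V_m = F(V_m)

print(len(V_m))                     # number of points in the approximation V_6
```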
The family S of similarities satisfies the open set condition (see p. 129 in [6]) with the interior int C of C. (Note that int C ≠ ∅ since the points p_1, . . . , p_N are affinely independent.) Thus, by Theorem 9.3 of [6], the Hausdorff dimension d of V satisfies the equality N · (1/2)^d = 1, that is, d = ln N / ln 2. Denote by µ the normalised d-dimensional Hausdorff measure on V; then, by (2.1), the support of µ coincides with V. We refer, for example, to [4] for the proof of (2.1).
3. The space H^1_0(V)

We retain the notations from the previous section and briefly recall from [7] the following notions (see also [8] and [10] for the case N = 3). Denote by C(V) the space of real-valued continuous functions on V and by C_0(V) := {u ∈ C(V) | u|_{V_0} = 0} the subspace of functions vanishing on the intrinsic boundary. The spaces C(V) and C_0(V) are endowed with the usual supremum norm || · ||_sup. For a function u : V → R and for m ∈ N one defines an energy form W_m(u) on the level-m approximation V_m, reproduced below. We have W_m(u) ≤ W_{m+1}(u) for every natural m, so we can put W(u) := lim_{m→∞} W_m(u) ∈ [0, ∞]. Define now H^1_0(V) := {u ∈ C_0(V) | W(u) < ∞}. It turns out that H^1_0(V) is a dense linear subset of L^2(V, µ) (equipped with the usual || · ||_2 norm). We now endow H^1_0(V) with the norm ||u|| = √(W(u)).
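The explicit formula for W_m did not survive in the text above. The display below is the standard definition used for the gasket, stated here as a reconstruction from the usual references such as [7] and [10] rather than as a quotation of this paper: neighbouring vertices of the level-m graph are compared and the sum is renormalised at each level, the factor (N + 2)/N reducing to the familiar 5/3 for the planar gasket N = 3.

```latex
% Assumed (standard) discrete energy forms on the Sierpinski gasket:
W_m(u) \;=\; \Bigl(\tfrac{N+2}{N}\Bigr)^{m}
       \sum_{\substack{x,\,y \in V_m \\ |x-y| = 2^{-m}}} \bigl(u(x) - u(y)\bigr)^2,
\qquad
W(u) \;=\; \lim_{m\to\infty} W_m(u).
```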
In fact, there is an inner product defining this norm: For u, v ∈ H 1 0 (V ) and m ∈ N let Then W(u, v) ∈ R, and H 1 0 (V ), equipped with the inner product W (which obviously induces the norm ||·||), becomes a real Hilbert space. Moreover, if c := 2N +3, then ||u|| sup ≤ c||u||, for every u ∈ H 1 0 (V ), and the embedding We now state a useful property of the space H 1 0 (V ) which shows, together with the facts that (
The Dirichlet problem on the Sierpinski gasket
Keep the notations from the previous sections. We also recall from [7] (respectively, from [8] and [10] in the case N = 3) that one can define in a standard way a bijective, linear, and self-adjoint operator ∆ : The operator ∆ is called the weak Laplacian on V .
Remark 4.1. Theorem 19.B of [22], applied to ∆ −1 : L 2 (V, µ) → L 2 (V, µ), yields in particular that H 1 0 (V ) is separable (see also sections 19.9 and 19.10 in [22]). Given a continuous function h : V × R → R, we can formulate now the following Dirichlet problem on the SG: Find appropriate functions u ∈ H 1 0 (V ) (in fact, u ∈ D) such that Remark 4.2. Using the regularity result Lemma 2.12 of [7], it follows that every weak solution of problem (P ) is actually a strong solution (as defined in [7]). For this reason we will call in the sequel weak solutions of problem (P) simply solutions of problem (P ).
Before defining the energy functional attached to problem (P ) we recall a few basic notions. ( In this case the mapping T ′ : E → E * assigning to each point u ∈ E the Fréchet differential of T at u is called the Fréchet derivative, or, shortly, Then the mapping J : satisfies the following properties: Proof. a) The proof of Proposition 2.19 in [7] implies that J is a C 1 -functional and that its derivative J ′ : b) To show that J ′ is compact, pick a bounded sequence (u n ) in H 1 0 (V ). Since H 1 0 (V ) is reflexive and since the embedding (3.4) is compact, there exists a subsequence of (u n ) which converges in (C 0 (V ), || · || sup ). Without any loss of generality we can assume that (u n ) converges in (C 0 (V ), || · || sup ) to an element u ∈ C 0 (V ).
According to (3.3), the functional T belongs to (H 1 0 (V )) * . We next show that the sequence (J ′ (u n )) converges to T in (H 1 0 (V )) * . By (3.3) the following inequality holds for every index n Using the Lebesgue dominated convergence theorem, we conclude that (J ′ (u n )) converges to T in (H 1 0 (V )) * . Thus J ′ is compact. c) The assertion follows from b) and Corollary 41.9 of [21]. We also give a direct proof: Clearly H is continuous. Let (u n ) be a sequence which converges weakly to u in H 1 0 (V ). Since the embedding (3.4) is compact, (u n ) converges to u in (C 0 (V ), || · || sup ). The Lebesgue dominated convergence theorem implies now that (J(u n )) converges to J(u). Thus J is sequentially weakly continuous.
where J : , is a C 1 -functional and its derivative I ′ : H 1 0 (V ) → (H 1 0 (V )) * is given by In particular, u ∈ H 1 0 (V ) is a solution of problem (P ) if and only if u is a critical point of I.
Proof. See Proposition 2.19 in [7]. We now state for later use some fundamental properties of the energy functional I. If (u n ) is a bounded sequence in H 1 0 (V ) such that the sequence (I ′ (u n )) converges to 0, then (u n ) contains a convergent subsequence.
Proof. Using Proposition 4.6, we know that for every index n I ′ (u n ) = W(u n , ·) − J ′ (u n ).
Assertion b) of Proposition 4.5 yields now the conclusion.
A Dirichlet problem depending on two parameters
Let f, g : V × R → R be continuous, and define the functions F, G : For every λ, η ≥ 0 consider the following Dirichlet problem on the SG By Proposition 4.6, the energy functional attached to the problem (DP λ,η ) is the map I : H 1 0 (V ) → R defined by The aim of this section is to apply Theorem 1.1 to show that, under suitable assumptions and for certain values of the parameters λ and η, problem (DP λ,η ) has at least three weak solutions. More precisely, we can state the following result.
Theorem 5.1. Assume that the following hypotheses hold: The function F : V × R → R satisfies the following conditions: (2) There exist t 0 > 0, M ≥ 0 and β > 2 such that (3) There exists t 1 ∈ R \ {0} such that for all x ∈ V and for all t between 0 and t 1 we have F (x, t 1 ) > 0 and F (x, t) ≥ 0.
Then there exists a real number Λ ≥ 0 such that, for each compact interval [λ 1 , λ 2 ] ⊂ ]Λ, ∞[, there exists a positive real number r with the following property: For every λ ∈ [λ 1 , λ 2 ] and every continuous function g : V × R → R there exists δ > 0 such that, for each η ∈ [0, δ], the problem (DP λ,η ) has at least three solutions whose norms are less than r.
Proof. Set X := H 1 0 (V ). Then X is separable (by Remark 4.1) and reflexive (as a Hilbert space). Define the functions Φ, J : X → R for every u ∈ X by In order to apply Theorem 1.1, we show that the conditions (i)-(v) required in this theorem are satisfied for the above defined functions. Clearly condition (i) of Theorem 1.1 is satisfied. (Note that Φ ′ : X → X * is defined by Φ ′ (u)(v) = W(u, v) for every u, v ∈ X.) Condition (ii) is a consequence of the facts that X is uniformly convex and that Φ is sequentially weakly lower semicontinuous. Condition (iii) follows from assertions a) and b) of Proposition 4.5. Obviously condition (iv) holds for u 0 = 0.
Hence the following inequality holds for every
Since β > 2, we obtain The inequalities (5.1) and (5.2) yield that Without loss of generality we may assume that the real number t 1 in condition (3) of (C2) is positive. Lemma 3.1 implies that |u| ∈ H 1 0 (V ) whenever u ∈ H 1 0 (V ). Thus we can pick a function u ∈ H 1 0 (V ) such that u(x) ≥ 0 for every x ∈ V , and such that there is an element x 0 ∈ V with u(x 0 ) > t 1 . It follows that U := {x ∈ V | u(x) > t 1 } is a nonempty open subset of V . Let h : R → R be defined by h(t) = min{t, t 1 }, for every t ∈ R. Then h(0) = 0 and h is a Lipschitz map with Lipschitz constant L = 1. Lemma 3.1 yields that u 1 := h • u ∈ H 1 0 (V ). Moreover, u 1 (x) = t 1 for every x ∈ U , and 0 ≤ u 1 (x) ≤ t 1 for every x ∈ V . Then, according to condition (3) of (C2), we obtain F (x, u 1 (x)) > 0, for every x ∈ U, and F (x, u 1 (x)) ≥ 0, for every x ∈ V.
Together with (2.1) we then conclude that J(u 1 ) > 0. Thus Relations (5.3) and (5.4) finally imply that assertion (v) of Theorem 1.1 is also fulfilled. Put Λ := 1 ρ2 (with the convention 1 ∞ := 0). Note that if g : V × R → R is continuous, then the map Ψ : X → R, defined by is, by the assertions a) and b) of Proposition 4.5, a C 1 -functional with compact derivative. So, applying Theorem 1.1 and Proposition 4.6, we obtain the asserted conclusion.
A perturbed two-parameters Dirichlet problem
Now we study, in a particular case, a perturbed version of the two-parameters problem (DP λ,η ) of the previous section. More exactly, for fixed reals r, s, q with 1 < r < s < 2 < q and for the parameters λ, η ≥ 0, consider the following Dirichlet problem on the SG: where we put, by definition, |0| ℓ · 0 := 0, for every ℓ < 0. By Proposition 4.6 and Remark 4.7, the map I λ,η : is the energy functional attached to problem (P λ,η ). The derivative of this map is given, for every u, v ∈ H 1 0 (V ), by For the sake of completeness we recall the two mountain pass theorems that will be used to prove the main result of this section. The first one is the celebrated mountain pass theorem due to Ambrosetti and Rabinowitz (e.g., Theorem 2.2 in [14]): Theorem 6.1. Let X be a real Banach space and let I : X → R be a C 1 -functional satisfying the Palais-Smale condition. Furthermore assume that I(0) = 0 and that the following conditions hold: (i) There are reals ρ, α > 0 such that I| ∂Bρ ≥ α.
(ii) There is an element e ∈ E \ B ρ such that I(e) ≤ 0. Then the real number κ, characterized as The next result generalizes the above Theorem by weakening condition (i). It goes back to P. Pucci and J. Serrin, and can be found in [13]. Theorem 6.2. Let X be a real Banach space and let I : X → R be a C 1 -functional satisfying the Palais-Smale condition. Furthermore assume that I(0) = 0 and that the following conditions hold: (i) There exists a real number ρ > 0 such that I| ∂Bρ ≥ 0. (ii) There is an element e ∈ E \ B ρ with I(e) ≤ 0.
Then the real number κ defined in (6.3) is a critical value of I with κ ≥ 0. If κ = 0, there exists a critical point of I on ∂B ρ corresponding to the critical value 0.
We also recall two standard results concerning the existence of minimum points of sequentially weakly lower semicontinuous functionals. We next establish some important properties of the energy functional I λ,η : H 1 0 (V ) → R attached to problem (P λ,η ).
such that both of the sequences (I λ,η (u n )) and (I ′ λ,η (u n )) are bounded, then (u n ) is bounded, too.
Proof. Let d be a real number such that I λ,η (u n ) ≤ d and ||I ′ λ,η (u n )|| ≤ d for every index n. Relations (6.1) and (6.2) yield for every index n Using (3.3), we get we conclude that the sequence (u n ) has to be bounded.
Proposition 6.6. Let λ, η ≥ 0. The energy functional I λ,η : H 1 0 (V ) → R attached to problem (P λ,η ) has the following properties: is a solution of problem (P λ,η ) if and only if u is a critical point of I λ,η . c) I λ,η is sequentially weakly lower semicontinuous. d) I λ,η satisfies the Palais-Smale condition. e) 0 is a local minimum of I λ,η .
Now we can state the main result of this section.
Endoscopic removal of 2 types of eroded gastric bands using endoscopic scissors
Video Video 1 XXX.
As obesity rates continue to increase, so too do rates of bariatric surgery.1,2 Two previously common types of bariatric surgery are the adjustable laparoscopic gastric band (Lap-Band) and vertical banded gastroplasty (VBG). Although both surgeries have now fallen out of favor, gastroenterologists still encounter patients with adverse events from these procedures, as they were popular in the 1990s and 2000s. Both procedures involve placement of a restrictive band to create a small stomach pouch; however, in the Lap-Band, the gastric band is much thicker in width and diameter, making it more difficult to sever.6,7
PROCEDURE
In the first case, a patient presented following Lap-Band placement with nausea, vomiting, and weight regain.Initial testing identified a 75% eroded band into the gastric lumen (Fig. 2).It is important for endoscopists to know that an adjustable gastric band includes an attached, subcutaneous access port, which means that any attempt at endoscopic removal requires a combined endoscopic/surgical approach.First, the endoscope (GIF-2T180; Olympus, Center Valley, Pa, USA) was advanced into the stomach.If a double-channel therapeutic endoscope is not available, a single-channel therapeutic endoscope can also be substituted.The connection tubing was cut and removed surgically (Fig. 3).In this case, a laparoscopic approach was pursued, given the intention to pursue surgical conversion to a Roux-en-Y gastric bypass for weight regain.The band itself is usually encapsulated within fibrous scar tissue, which is why surgical removal from the peritoneal side can be challenging.Entering this capsule can result in gastric perforation, and so when feasible, endoscopic removal is often preferred.The rat-tooth forceps (Micro-Tech Endoscopy, Ann Arbor, Mich, USA) was then used to grasp the eroded band and to provide traction.The endoscopic scissors (Ensizor; Apollo Endoscopy, Tex, USA) were then used to sever the gastric band (Fig. 4).The endoscopic scissors have a blade length and open aperture of 5.08 mm, permitting them to accommodate the width of the gastric band.It is important that each cut is directed along the same axis in order to create a dissection plane, to allow for timely and safe severance (Video 1, available online at www.videogie.org).In this case, 40 to 50 total "bites" were required.Once severed, the freed band can be removed transorally using the retrieval forceps.After the procedure, the patient was admitted for observation in accordance with routine postoperative recommendations.She recovered well and was discharged the following morning after advancing her diet.
In the second case, a patient presented following VBG with nausea, vomiting, and epigastric pain and was found to have a 75% eroded gastric band (Fig. 5). The gastric band was severed using endoscopic scissors and removed transorally (Fig. 6, Video 1). Given that there was no attached connection tubing or subcutaneous portion, laparoscopic removal was avoided. Following band removal, a large remnant tissue bridge was also resected, which in this case was also contributing to the patient's obstructive symptoms. The tissue bridge was first injected with 4 mL of 1:1000 epinephrine solution, followed by transection using a dedicated endoscopic knife (SB knife; Olympus Corp) with electrocautery (Endocut Effect 3; ERBE, Tübingen, Germany) (Fig. 7). The cut edges of the tissue bridge were then closed using hemostatic clips and the endoscopic tack and suture device (X-tack; Apollo Endoscopy) to treat any unappreciated, microscopic perforation. The patient was discharged on the same day with plans for follow-up endoscopy in 3 months.

In this video, we demonstrate the endoscopic removal of 2 types of eroded gastric bands using endoscopic scissors. We highlight the importance of distinguishing between a Lap-Band and VBG, as the management approach differs because of the presence of attached connection tubing in the former. Alternative methods of endoscopic removal involve using a guidewire wound by a mechanical lithotripter to sever the band.8,9 In comparison, the endoscopic scissor-assisted technique is less technically complex and does not require expertise in using the mechanical lithotripter.
DISCLOSURE
Dr Abu Dayyeh is a consultant for Endogenex, Endo-TAGGS, Metamodix, BFK, USGI, Cairn Diagnostics, Aspire Bariatrics, and Boston Scientific; has received research grants from USGI, Cairn Diagnostics, Aspire Bariatrics, and Boston Scientific; has received speaker honoraria from Olympus, Johnson & Johnson, Medtronic, and Endogastric Solutions; and has received grant research support from Medtronic, Endogastric Solutions, Apollo Endosurgery, and Spatz Medical.Dr Storm has received research grants from Apollo Endosurgery, Boston Scientific, Endogenex, Endo-TAGGS, and Enterasense, and is a consultant for Apollo Endosurgery, Boston Scientific, ERBE Elektromedizin, Intuitive, Medtronic, and
Figure 1. Illustrated comparison of an adjustable laparoscopic gastric band (Lap-Band; left) versus vertical banded gastroplasty (VBG; right). The Lap-Band consists of a thick silicone elastomer. In comparison, the gastric band in a VBG is much slimmer and easier to sever.
Figure 2. A migrated gastric band following adjustable laparoscopic gastric band placement, now with 75% of its circumference eroding into the gastric lumen.
Figure 3. Laparoscopic removal of the adjustable laparoscopic gastric band's connection tubing.
Figure 4. The endoscopic portion of the migrated adjustable laparoscopic gastric band is severed using endoscopic scissors.
Figure 5. A migrated gastric band following vertical banded gastroplasty, now with 75% of the gastric band eroding into the stomach lumen.
Figure 6. The gastric band is severed using endoscopic scissors. Given that there is no attached connection tubing or subcutaneous portion with a vertical banded gastroplasty, laparoscopic removal can be avoided.
Figure 7. Transection of a post-vertical banded gastroplasty tissue bridge using a dedicated endoscopic knife. The cut edges were then closed using hemostatic clips and an endoscopic tack and suture system.
New records of reptiles and amphibians from Bhutan
Acknowledgements: This paper is the result of contribution of data and photographs by Dorji Wangchuk, Researcher at Royal Manas National Park, Sarpang Bhutan, Karma Wangdi, Forester and Bird Sherub, Dy. Chief Forest Officer, who work at Ugyen Wangchuk Institute for Conservation of Environment, based in Bumthang. With deep appreciations, efforts put in by Dr. D.B. Gurung, Professor of the College of Natural Resources (CNR), Royal University of Bhutan, Lobesa and the former Lobesans (students of CNR, Lobesa), Yee J, Baep Tshering and Sherub Jamtsho (all Forest Rangers) who work in different Districts of the Kingdom are also included in this paper. Without their support, this paper could never have been written. In my office in Trashigang, Phuntsho Wangdi, Forest Ranger deserves a mention for helping me make maps and confirming if GPS readings collected by various individuals corresponded to the species. Thanks are also due to my field colleagues in Trashigang office who always give me preferences to complete any task I plan by taking my share of official work on their shoulders. Abstract: Thirteen new species of anurans that include six dicroglossids (Euphlyctis cyanophlyctis, Fejervarya pierrei, F. teraiensis and F. nepalensis, from Samdrup Jongkhar, Nanorana conaensis and N. pleskei from Haa), three megophryids (Xenophrys major and X. glandulosa from Trashiyangtse, and X. minor from Mongar) and four ranids (Amolops mantzorum from Trashiyangtse, Hylarana taipehensis and Sylvirana leptoglossa from Samdrup Jongkhar and S. cf. guentheri from Mongar) and one testudine a geoemydid (Melanochelys trijuga from Sarpang), one sauria an anguid (Ophisaurus gracilis from Zhemgang) and two colubrids (Amphiesma platyceps and Dinodon gammiei both from Paro) are reported for the first time from Bhutan. Discussions have been restricted to their presence and the distribution in and outside Bhutan. As such, this paper provides the geographic locations, morphometric measurements (in some cases), time when they were seen in their habitat and information on who have collected the data of the species reported. The quality of the data is highly variable being collected opportunistically by various individuals from various places over the last six years.
Bhutan is primarily a mountainous country with majority of its inhabitants depending on subsistence farming and sandwiched between the two Asian giants China and India.The country's scientific study moves slow on specific taxa such as herpetofauna, invertebrates, and many others (with the exception of mammals, birds and to some extent plants) because of lack of interest.Moreover, most of the existing research institutes and centres in the country do not focus much on species conservation as their priorities are set for the development of the country including betterment of crop production, marketing and other growthoriented works aimed at improving the life of the rural people keeping in mind, the wholesome concept of gross national happiness.Bhutan's fully authorized lone conservation agent, the Department of Forest and Park Services performs almost all the tasks of main stream species protection, management, conservation and research besides regulating and catering to the daily needs of natural resources for the people of Bhutan.Most of the works done by the department include issuing of resources collection permits, planning of forest management regimes, protected area conservation planning, patrolling the areas where there is a likelihood of illegal extraction of natural resources and poaching of wildlife that consumes lots of time to focus on taxa specific studies.In such a scenario, the lack of information on many taxa from the country is obvious and herpetofauna is no exception.However, the safety of the animals of the country is guaranteed by the very goal of maintaining 60% forest cover to perpetuity enshrined in the Constitution of the Kingdom of Bhutan, which is backed by several policies, acts, laws, rules and regulations that favour an undisturbed forest ecosystem.Recent works on reptiles (Wangyal & Tenzin 2009;Wangyal 2011Wangyal , 2012;;Wangyal et al. 2012) and amphibians (Wangyal & Gurung 2012 a,b) reported many new records for Bhutan which showed that the research on these taxa is still nascent.
This paper reports at least 13 species of anurans, a turtle, lizard and two snakes.For each of these records, a digital photograph, geo-referenced locality data and details of measurement such as the SVL, hind and the fore limb lengths where relevant are provided.
Materials and Methods
Since this paper is the result of information gathered by the author over the past several years, no specific survey methodology can be mentioned.The method used is rather an amalgamation of information from various local and national networks of students, farmers and field colleagues who had most of the time collected data without considering any serious research reporting.Photographs, measurements and habitat information were collected from different sources opportunistically.As such, no systematic surveys were conducted to obtain the data presented in this paper because of which the data are highly variable with varying places, time, date and the seasons.Most of the species information were the result of opportunistic encounters by individuals who submitted the data to the author for species confirmation.Some of the data presented in this paper also include those collected by the field officials of the Department of Forests and Park Services who gathered them wherever and whenever they came across on the encouragement of the author.
Telephonic conversation, email exchanges, visual talks through internets and personal contacts during the official workshops and meetings were very useful in obtaining much of the data provided in this report.Interview of few people in the field, meeting with experts and sending many photographs to experts outside the country for confirmation of species were useful in identifying and authenticating the data collected over the last six years.Physical observation in the field to obtain specific habitat details such as where the species were seen, time of photography, altitude, habitat, coordinates, etc. that were taken with the help of instruments such as altimeter and GPS are considered as field data.Measurements of the body parts of species such as SVL, carapace length, etc. where relevant were taken using a steel tape nearest to millimetre.
Results
Compiling all the past records (Bauer & Günther 1992;Das & Palden 2000;Wangyal & Tenzin 2009;Wangyal 2011Wangyal , 2012;;Wangyal et al. 2012;Wangyal & Gurung 2012a,b), Bhutan thus far has 36 species of amphibians (34 anurans, one caudata, one caecilian), 83 species of reptiles (57 snakes, 20 lizards, one crocodile, five turtles).With this report, 13 anurans, a turtle, a legless lizard and two snakes are added to the country's biodiversity list.Due to varying nature of data collection, some species data were collected even before the latest herpetofaunal reports of the country.Some species information that were collected long ago were not reported in the past due to a really small amount of data available that would be suitable to have an article or a paper.Further, the information available needed double confirmations since all the data was collected by people who lack knowledge on the species.However, now that the species are being reported, the information in this paper is well verified as discussions on species were done adequately with herpetologists known to the author outside Bhutan.
For giving a distribution idea of the species, a map (Fig. 1) has been produced specifying the locations of 13 species of anurans (including also another four anurans that need further confirmation), one each of turtle, a lizard and two snakes collected from different parts of the country using the data collected over the years.Where relevant, morphometric measurements such as SVL, fore limb and hind limb in anurans, carapace size in turtle are given in the species accounts.The geocoordinates of the species locations are tabulated (Table I).This report includes data on coordinates, altitude and names of the places where these species were seen including the identifications of personnel who collected the species information.
Dicroglossidae
Indian Skipping Frog Euphlyctis cyanophlyctis (Schneider, 1799): A specimen (Image 1) measuring SVL = 60mm, Hind Limb = 95mm, Forelimb = 42mm was collected from Tshangkha Lake, Dagana District at 1378m at 1650hr on 14 May 2011.The habitat, a small Lake which is also home to a small population of Tylototriton verrucosus in the middle of the village is a marsh with accumulation of mud from the agricultural fields around it.A farm road passes just on its side.A second specimen (Image 2) measuring SVL = 60mm, Hind Limb = 90mm, Fore Limb = 40mm was observed and photographed by Dorji Wangchuk, a Researcher of Royal Manas National Park at Gelephu on 06 June 2007 at an elevation of 225m, Sarpang District.This particular species was found in a pool created by rainfall in the Gelephu town area.Owing to its wide distribution IUCN considers the species as least concern.Yet another specimen (Image 3) was found in Bhangtar, Samdrup Jongkhar District on 27 October 2011 at 1333hr at an altitude of 253m by Mr. Karma Wangdi, a forester with Ugyen Wangchuck Institute for Conservation and Environment (UWICE) based in central Bhutan's Bumthang District.Ahmed et al. (2009) measured SVL of 65mm for a northeastern Indian specimen which is almost same as the measurements in Bhutanese specimens.Literature reveals active breeding to start by early summer when water temperature rises to 10-12 0 C (Khan & Malik 1987b).Outside Bhutan, the species is found in Afghanistan, Bangladesh, India, Nepal, Pakistan, Sri Lanka, Iran and westwards to Afghanistan up to 1800m (Khan 1997c).
Pierre's Cricket Frog Fejervarya pierrei (Dubois, 1975): An individual (Image 4) with SVL of 26mm was caught and deposited at the laboratory of College of Natural Resources, Royal University of Bhutan.The location data of the habitat, Tshangkha Lake, Dagana District, at an elevation of 1378m was surveyed on 08 May 2011 and the animal was caught at 1838hr while making the rounds of the lake.Several individuals of the species that are known to occupy lowland forests, grasslands and open wet places including the paddy fields retreating in moist and shadowy places and burrows were found around the lake.It is widely found in southern China, South and Southeast Asia.IUCN considers it as Least Concern (Shrestha & Ohler 2004).
Terai Cricket Frog Fejervarya teraiensis (Dubois, 1984): A specimen measuring SVL = 50mm was collected from Gelephu, Sarpang District at an altitude of 255m at 1100hr on 06 June 2007 and later released. However, single photographic evidence (Image 5) was kept for the reporting purpose. The species is known to occupy lowland terai habitats.

Nepal Cricket Frog Fejervarya nepalensis (Dubois, 1975): Two individuals were photographed in Ngera Ama Ri, a stream in Samdrup Jongkhar District, at an altitude of 356m on 29 October 2010 at 1457hr. The area is tropical lowland. A big cream mid-dorsal line that tapers towards the vent is highly conspicuous (Image 6), while in another case the mid-dorsal line is quite small (Image 7). The dorsum is smooth with longitudinal folds made of oblong tubercles. The venter is uniformly smooth. The basic dorsal colour is grayish-brown with dark, oblong and irregular spots. The sides of the body and the posterior parts of the thighs are marbled. A dark inter-orbital band is disjoined by the mid-dorsal line. The forelimbs have dark stripes, the hind limbs are barred, the ventral sides of the hands and feet bear pale metatarsal tubercles, and the toe webbing is marbled with dark colour. It differs from others in the group as it has distinct dark bars on the legs. The species is known to occupy brooks and ponds in wooded surroundings and breeds in summer. It is native to South Asia including India, Bangladesh, Nepal and now Bhutan.
Cona Spiny Frog Nanorana conaensis (Fei & Huang, 1981): A lone individual (Image 8) was found nearby Tshenchulum Lake, in Haa District on 09 July 2010 at 1304hr at an altitude of 4066m by Karma Wangdi of UWICE.Although described as a species confined to Mama, in Cona County, southern Xizang Autonomous Region, China, it is now reported for the first time from Bhutan as this report confirms its presence from Haa, western Bhutan.Further, Cona is not too far from the spot where this species was found.It is known to occupy small streams in forested and shrubby areas, and breeds in these streams, laying its eggs under stones.
Pleske's High Altitude Frog or Tibetan or Plateau Frog Nanorana pleskei (Gunther, 1896): A lone species of Nanorana pleskei was also found at a place (not far from Noob Tshona Patra) below a day's walk from Tshenchulum Lake, Haa (Images 9, 10) on 08 July 2010 at 1041hr at an altitude of 4083m by the same person who found N. conaensis (not that far from the spot where N. conaensis was seen).Considered native to China, this species found in three provinces in China, viz., Qinghai, Gansu and Sichuan provinces between 3,300-4,500 m (Wang et al. 2004).This is the first report from Bhutan, a sort of range extension.
Megophryidae
White-Lipped Horned Toad Xenophrys major (Boulenger, 1908): There are many unaccounted encounters with this species in Trashiyangtse, Choetenkora.Some civilians are even known to eat the species.An individual (Image 11) was hit by a stray arrow (Bhutanese men enjoy playing archery every little free time they get) just behind the archery range near the Institute for Zorig Choosoom, based in Trashyangtse not far away from the small Choetenkora suburb.The injured toad was collected by the author and kept in the office for at least two nights before it succumbed to its injuries.However, photograph was taken on 26 August 2009 at 0813hr and stuffed at the Bumdeling Wildlife Sanctuary laboratory where it can be assessed even today.
The species is known from Cambodia, China, India, Lao People's Democratic Republic, Myanmar, Thailand, and Vietnam.This species is also widespread in northeastern India states of Arunachal Pradesh and Nagaland.
Glandular Horn Toad Xenophrys glandulosa (Fei, Ye & Huang, 1991): An individual (Image 12) was collected by one Tashi Phuntsho, a forester then working in Bumdeling Wildlife Sanctuary from a nearby stream just in front of his quarters in Choetenkora, Trashiyangtse District on 15 August 2011 at 0740hr.The species was put through to one Kaushik Deuti, a herpetologist at ZSI, Kolkatta who confirmed the identity.The town has a number of streams around and the area remains wet most of the summer.Outside Bhutan, this species is known from Yunnan Province in China and from the northeastern Indian state of Nagaland.
Little Horned Frog Xenophrys minor (Stejneger, 1926): A lone species was photographed (Image 13) from Serzhong, Mongar District under Bumdeling Wildlife Sanctuary on 25 July 2009 at 1030hr at an elevation of 1287m by one Karma Wangdi, who was then a forester in Bumdeling Wildlife Sanctuary.Serzhong is a village under Mongar District between 900-1600 m and is surrounded by Chirpine (Pinus roxburghii) species.The species is also found in China, Thailand and Vietnam.
Ranidae
Mouping Sucker Frog Amolops mantzorum (David, 1872): A specimen was (Image 14) collected from a perennial stream called Serkang Chu, that runs through the suburban Choetenkora town, the headquarters of Trashiyangtse District at an altitude of 1745m on 23 September 2008 at 2201hr.Considered endemic to southeastern Gansu and western Sichuan provinces of China, it has been recorded between 1,000 and 2,800 m.I now report its presence in Bhutan as well.It also is known to occur in India (Meghalaya, Sikkim, Himanchal Pradesh, Assam, Manipur, and West Bengal), Bangladesh and Nepal.
Taipeh Frog Hylarana taipehensis (van Denburgh, 1909): A specimen (Image 15) was collected and photographed from a place called Dungkarling, Phuntshothang with a SVL of 40mm on 23 July 2011 at 2223hr at an elevation of 150m. IUCN considers it as a species of Least Concern (van Dijk et al. 2004). Feeding on insects, grasshoppers, etc., they are found in dense tree masses in groups during daytime (Lue 1990). This species can be found in wet, damp crop fields, ponds, and hills with tea crop plants present (Lue 1990), as well as open grassy wetlands, rice paddies, river floodplains, and swamps in deciduous forests. The species is known from Bangladesh, Cambodia, China, Hong Kong, Lao People's Democratic Republic, Myanmar, Taiwan, Thailand, and Vietnam.

Assam Forest Frog Sylvirana leptoglossa (Cope, 1868): A specimen (Image 16) was collected and photographed from a place called Dungkarling, Phuntshothang, Samdrup Jongkhar District with a SVL of 70mm on 23 July 2011 at 2224hr at an elevation of 150m. IUCN considers it as a species of Least Concern. Looking at the photograph, fingers and toes in order of length are 3>4>1>2 and 4>5>3>2>1, respectively (after Lalremsanga et al. 2007). Found in tropical swampy forests from 40-300 m (Ahmed et al. 2009), the species is known to feed on insects and flies. The place where this species was located is not far away from the Indian state of Assam, and the vegetation type, altitude and other ecological attributes would not differ much. Outside Bhutan, this species is known to occur in India (Assam, Meghalaya and Mizoram), Bangladesh, Myanmar and Thailand. It is a new record for Bhutan.

Gunther's Amoy Frog Sylvirana cf. guentheri (Boulenger, 1882): Webbing between toes is 3/4, similar to what Yang & Rao (2008) suggest, as seen in the photograph. Dorsal surface brown with irregular black blotches but the belly is white. Conspicuous longitudinal black marks along the dorsolateral folds are visible. The dorsum of the hind limbs has black horizontal stripes while the posterior sides of the legs have gray-black spots. As described by Yang & Rao (2008), marks on the dorsal surfaces of the legs are longitudinally arranged in a row.
By way of distribution, it is known to be widely distributed in southern China, Hong Kong, Macau, Taiwan, and Viet Nam (Kuangyang et al. 2004), and it has also been recently introduced to Guam (Christy et al. 2007). A lowland species, it can be found up to elevations of 1100m (Kuangyang et al. 2004). Photographs of an amplecting pair (Image 17) were taken by Bird Sherub, a Bhutanese ornithologist, from Zhonggarchu, Lingmethang, Mongar on 30 September 2010 at 1642hr at an altitude of 606m.
Geoemydidae
Black Pond Turtle Melanochelys trijuga (Annandale, 1930): A sixth species for Bhutan and the first record of Indian Black Turtle or Indian Pond Terrapin, Melanochelys trijuga, a geoemydid specimen (Image 18) was found from Kanamakura, Royal Manas National Park in Sarpang District (Image 19).The terrapin had a carapace length and breadth of 7 and 6 centimeters, respectively, and was found in the grassland on 18 April 2012 at 0618hr at an altitude of 260m while looking for signs of mammals in the area.The identification is based on the shape of its tricarinate carapace which was found to be moderately depressed with lateral margins more or less turned upward and the plastron being dark with each shield having a light margin.Digits were fully webbed while the tail was short.It had flat limbs with yellow reticulations on sides.Olive brown head had arrow head shaped black mark on forehead.Outside Bhutan, the species is found in Nepal, India, Bangladesh and Myanmar.While it is common in India and Nepal, Bangladesh and Myanmar consider it locally endangered.
Anguidae
Burmese Glass Lizard Ophisaurus gracilis (Gray, 1845): A single Burmese Glass Lizard was seen and photographed (Image 20) by one Sherub Jamtsho, a Forest Ranger working for Zhemgang Forest Division as an Officer In Charge of Khomshar Forest Range, Zhemgang District on 02 February 2013 at 1913hr in an open ground near school campus in the middle of the Khomshar Village, Bardo, Zhemgang District at an altitude of 1305m.The habitat is surrounded by paddy and maize fields and the species actually was killed by villagers who mistook it for a snake.
Another specimen (Image 21) was seen in an open country side filled with Eupatorium weeds, ferns, grasses and a few Benthamidia capitata trees.The overall forest surrounding the habitat is dominated by Lithocarpus species with Schima sp., Quercus sp., Exbackliandia sp., Michalia sp., and Daphnephyllum sp., etc. as secondary species.The lizard was spotted at the base of the partially rotten stump with thick mosses and grasses on it while digging pits for plantation near Nimshong-Phumithang Dratshang under Shingkhar, Zhemgang District on 07 June 2012 at 0953hr at an altitude of 1866m.The quality of the image is poor since a mobile camera was used due to unavailability of better camera at that point of time.
Outside Bhutan, this legless lizard is known to occur in northeastern India, southern China, northern Myanmar, Laos, Thailand and Vietnam.
Colubridae
Himalayan Mountain Keelback Amphiesma platyceps (Blyth, 1854): One Ugyen Dorji, a Forest Ranger working for the National Land Commission for cadastral survey in Paro District, found a Himalayan Mountain Keelback (Image 22) near Kuenga School, on the banks of Pachu, a river that feeds the Paro Valley, on 18 October 2010 at 2014hr in Paro. According to him, the species was found resting on the sandy bank of Pachu, the main river that passes through the Paro Valley. There are lots of unconfirmed reports of the species' presence in Thimphu, Punakha and Wangdi Phodrang districts. It is mainly found in India, Nepal, Bangladesh, Pakistan and China.

Sikkim False Wolf Snake Dinodon gammiei (Blandford, 1878): Gammie's Wolf Snake, Dinodon gammiei, is a nonvenomous species of snake first reported from Sikkim, India. A juvenile (Images 23 & 24) was found inside an animal rescue shed in Chukha Village, Paro District by an American animal rescuer, Jamie Vaughan, on 21 April 2013. The species identity was confirmed by Abhjit Das, an Indian herpetologist, and Professor Indraneil Das. The latest reports of the presence of the species in northeastern India include those of Mistry et al. (2007) and Chettri & Bhupathy (2009). The supposedly rare species may be present in good numbers in Bhutan.
Figure 1 .
Figure 1. Map showing the data collection areas. © Jigme Tshelthrim Wangyal
Image 11. Xenophrys major, Choetenkora, Trashiyangtse. © Jigme Tshelthrim Wangyal
Image 12. Xenophrys glandulosa, Choetenkora, Trashiyangtse. © Jigme Tshelthrim Wangyal
Image 13. Xenophrys minor, Serzhong, Mongar. © Karma Wangdi
|
2018-12-29T22:01:48.215Z
|
2013-09-26T00:00:00.000
|
{
"year": 2013,
"sha1": "208a4abe9c780b026ffdcfa07481c2057f9a334f",
"oa_license": "CCBY",
"oa_url": "https://threatenedtaxa.org/JoTT/article/download/1512/2769",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "208a4abe9c780b026ffdcfa07481c2057f9a334f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
}
|
27292669
|
pes2o/s2orc
|
v3-fos-license
|
Hypothesis Designs for Three-Hypothesis Test Problems
As a helpful guide for applications, the alternative hypotheses of the three-hypothesis test problems are designed under the required error probabilities and average sample number in this paper. The asymptotic formulas and the proposed numerical quadrature formulas are adopted, respectively, to obtain the hypothesis designs and the corresponding sequential test schemes under the Koopman-Darmois distributions. The example of the normal mean test shows that our methods are quite efficient and satisfactory for practical uses.
Introduction
In practice, the multihypothesis test problems are of considerable interest in the areas of engineering, agriculture, clinical medicine, psychology, and so on.For instance, the multihypothesis tests are involved in pattern recognition 1-4 , multiple-resolution radar detection 5-7 , products' comparisons 8, 9 , and information detection 10 .Before the inspections, the hypotheses must be determined according to such practical needs as the balance of risks and costs.As Wetherill and Glazebrook 11 pointed out, combinations of hypotheses, risks, and costs may need to be tried iteratively until an acceptable design is attained.This bothers and burdens the practitioners.
To avoid too many troublesome trials and to produce the hypotheses directly, we discuss the hypothesis designs under the controlled risks and expected costs in this paper.As an initial exploration, only the three-hypothesis test problems are considered here.Indeed, our methods may extend to the multihypothesis cases.
In practice, test costs are mainly determined by sample sizes.Therefore, the sample size becomes an issue relating to the statistical analysis of problems in many aspects; see for example, Chen et al. 12 , Oliveira et al. 13 , Li and Zhao 14 , Li et al. 15 , Bakhoum and Toma 16 , Cattani 17 , as well as Cattani and Kudreyko 18 .Accordingly, we consider the Average Sample Number ASN , which is one of the most important values in evaluating the expected costs of sequential test schemes.
In the three-hypothesis test problem, the null hypothesis is always set as a standard and medium status. For example, Anderson 8 discussed the three-hypothesis test problem to decide whether the difference of two yarns' strength is zero (the null hypothesis), positive, or negative. Realistically, the standard and medium status, denoted as θ 0 , is definite, while the two alternatives beside it need to be designed to balance the risks and costs. Thus, in this paper, we try to design the alternatives θ −1 and θ 1 (θ −1 < θ 0 < θ 1) under the required error probabilities and ASN for testing the parameter θ of the Koopman-Darmois distribution (1.1). To simplify the discussion, we only consider the designs of the two alternative hypotheses symmetric with the null hypothesis, that is, θ 1 − θ 0 = θ 0 − θ −1 = k > 0. Actually, the asymmetric designs may be obtained by extending our methods slightly.
Then, the test problem here is H −1 : θ = θ −1 vs. H 0 : θ = θ 0 vs. H 1 : θ = θ 1 (1.2). For the multihypothesis test problems, Armitage 19 provided a classical test scheme by simultaneously applying the method of the Sequential Probability Ratio Test (SPRT) on each pair of the hypotheses. This test scheme pattern is simple and easy to implement. When testing the three hypotheses for the Koopman-Darmois distribution (1.1), Armitage's scheme may be illustrated as in Figure 1, where AL//CM are the boundaries for "θ = θ 1 versus θ = θ 0" and CP//DQ are those for "θ = θ 0 versus θ = θ −1", while the boundaries for "θ = θ 1 versus θ = θ −1" are encircled by AL and DQ and thus are neglected. According to Figure 1, the decision rule (1.3) is to accept the hypothesis whose region the cumulative sum reaches, and to continue sampling without any decision otherwise, where T n = X 1 + X 2 + · · · + X n and X 1 , X 2 , . . . are independent sequential observations from a Koopman-Darmois distribution.
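To make the decision rule concrete, a minimal Python sketch of the paired-SPRT boundaries applied to the cumulative sum T n is given below. The exact acceptance geometry of rule (1.3) is not reproduced in this text, so the reading used here (upper line AL for accepting H 1, lower line DQ for accepting H −1, and the strip between CP and CM, opening at n 0, for accepting H 0) is an assumption; the parameter names mirror (n 0 , a, c, d, tan ψ, tan ϕ).

```python
def armitage_three_hypothesis_test(xs, n0, a, c, d, tan_psi, tan_phi):
    """Run the sequential three-hypothesis test on the cumulative sum T_n.

    xs: iterable of observations.  Returns (decision, sample size).
    The boundary reading below is an assumption, not the paper's exact rule (1.3).
    """
    t, n = 0.0, 0
    for x in xs:
        n += 1
        t += x
        if t >= a + n * tan_psi:                      # crossed AL: accept H1
            return "accept H1", n
        if t <= d + n * tan_phi:                      # crossed DQ: accept H-1
            return "accept H-1", n
        if n >= n0 and c + (n - n0) * tan_phi <= t <= c + (n - n0) * tan_psi:
            return "accept H0", n                     # inside the CP-CM strip after n0
    return "no decision", n
```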
For the given θ −1 , θ 0 , and θ 1 , the test scheme in Figure 1 is decided by 6 parameters (n 0 , a, c, d, ψ, ϕ). ψ and ϕ may be determined according to Armitage 19 under the Koopman-Darmois distribution (1.1); then the remaining 4 parameters (n 0 , a, c, d) form the scheme. Altogether with the hypothesis design value k in the test problem (1.2), the 5 underdetermined values are (k, n 0 , a, c, d). In the three-hypothesis test problems, the error probabilities α and β should be assigned to the error probabilities γ 1 , γ 2 , γ 3 , γ 4 in correspondence with the requirements (1.4).
And the requirement on the ASN is given by (1.5), where N > 0 is a provided integer and θ ASN ∈ Θ is the point at which the ASN needs to be controlled. θ ASN may take values of θ −1 , θ 0 , θ 1 , and so on according to practical needs. Then, under the constraints (1.4) and (1.5), we may find the proper (k, n 0 , a, c, d) by virtue of their relationships with the error probabilities and ASN.
Unfortunately, however, to the best knowledge of the authors, the accurate formulas for the performances of the three-hypothesis test scheme are still unavailable possibly because of its sequential feature and anomalistic continuing sampling area.In the following, the hypothesis designs and the test scheme parameters are determined under the required error probabilities and ASN in terms of some approximate expressions, that is, the asymptotic formulas and the proposed numerical quadrature formulas.
Designs under Asymptotic Formulas
In this section, we try to find the hypothesis designs and test schemes under the required error probabilities and ASN by virtue of the asymptotic formulas of the multihypothesis sequential test scheme by Dragalin et al. in 22, 23. Firstly, we discuss how to control the error probabilities. Let C i be the critical value of the logarithmic likelihood ratio function for accepting θ i , and let R i be the probability limit of incorrectly accepting θ i , i = −1, 0, 1. According to Dragalin et al. 22 , under the condition of equal prior probabilities for the three hypotheses, the probability of wrongly accepting θ i for the Armitage 19 scheme may be controlled by R i if the critical value C i is set as in (2.1). Thus, the error probabilities γ 1 , γ 2 , γ 3 , γ 4 are in control if we follow the critical values in (2.1), where R −1 = γ 4 , R 0 = min{γ 1 , γ 2 }, and R 1 = γ 3 . Setting the critical values C −1 , C 0 , C 1 equal to the corresponding logarithmic likelihood ratio functions, we have the following expressions (2.2) for the test scheme parameters under the Koopman-Darmois distribution (1.1):
2.2
Note that the expressions in 2.2 define the relations between the hypothesis design parameter k and the test scheme parameters n 0 , a, c, d , while k has not been determined so far.
In the following, the hypothesis design parameter k is found with the help of Dragalin et al.'s asymptotic ASN formulas 23 .
Based on the nonlinear renewal theory, Dragalin et al. 23 summarized and developed the asymptotic ASN formulas under max{α, β} → 0. Specifically, when θ 1 − θ 0 = θ 0 − θ −1 , the asymptotic ASN formulas under the two alternatives θ −1 and θ 1 are given by (2.3), and the formula for the null hypothesis θ 0 is given by (2.4), in which the constant 0.5641895835 (= 1/√π) appears and v is the value related to the covariance of the logarithmic likelihood ratio functions.
Notice that the approximate ASN formulas (2.3) and (2.4) only depend on the hypothesis design parameter k when θ 0 is given. Therefore, to find the proper hypothesis design under the desired number N, we set up an equation about k to meet the ASN requirement on one of the three hypothesis values, that is, ASN(θ ASN ) = N (2.5), where θ ASN may be θ −1 , θ 0 , or θ 1 .
Then, the hypothesis design parameter k is the solution to 2.5 and the test scheme with n 0 , a, c, d may be obtained correspondingly according to 2.2 .Illustrations are provided in Example 1 for testing the normal mean with the variance known.
Accordingly, we have the following. In this example, the test scheme parameters should be as given in (2.6), and for the normal distribution N(μ, 1) the relevant quantities are expressed in terms of φ(·) and Φ(·), the probability density function (p.d.f.) and cumulative distribution function (c.d.f.) of the standard normal distribution, respectively.
Consider the following 4 cases, respectively:
2.8
Then, solving (2.5), we obtain the hypothesis designs k as shown in Column 2 of Tables 1, 2, 3, and 4. The corresponding test scheme parameters (n 0 , a, tan ψ) from (2.6) are listed in Columns 3-5 of Tables 1-4. To evaluate the method's efficiency, we record the Monte Carlo simulation study results with 1,000,000 replicates in Tables 5, 6, 7, and 8, where the simulated value of ASN(μ ASN ) is reported and ε is the relative difference between this simulated value and N. Note that the simulated probabilities under μ −1 are neglected here since they are nearly equivalent to their counterparts under μ 1 in terms of the schemes' symmetry.
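A simulation in the spirit of these tables can be written directly from the scheme parameters; the sketch below draws N(μ, 1) observations and estimates the acceptance probabilities and ASN. It reuses the boundary reading assumed in the earlier sketch, and reps, max_n and the seed are arbitrary illustrative choices rather than the settings used for Tables 5-8.

```python
import numpy as np

def simulate_scheme(mu_true, n0, a, c, d, tan_psi, tan_phi,
                    reps=100_000, max_n=10_000, seed=0):
    """Monte Carlo estimate of acceptance probabilities and ASN for N(mu, 1)."""
    rng = np.random.default_rng(seed)
    counts = {"H-1": 0, "H0": 0, "H1": 0}
    total_n = 0
    for _ in range(reps):
        t, n = 0.0, 0
        while n < max_n:
            n += 1
            t += rng.normal(mu_true, 1.0)
            if t >= a + n * tan_psi:          # accept H1
                counts["H1"] += 1
                break
            if t <= d + n * tan_phi:          # accept H-1
                counts["H-1"] += 1
                break
            if n >= n0 and c + (n - n0) * tan_phi <= t <= c + (n - n0) * tan_psi:
                counts["H0"] += 1             # accept H0
                break
        total_n += n
    probs = {h: cnt / reps for h, cnt in counts.items()}
    return probs, total_n / reps              # acceptance probabilities and ASN
```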
Obviously, the accuracy of the solution k to 2.5 is decided by the efficiency of the ASN formulas 2.3 and 2.4 .On one hand, from Dragalin et al. 23 and the ε's in Tables 5-8, we conclude that the formulas in 2.3 for ASN θ −1 and ASN θ 1 are more efficient than the one in 2.4 for ASN θ 0 when testing the normal mean.On the other hand, the asymptotic ASN formulas perform better under smaller error probabilities since the asymptotic limit is taken as max{α, β} → 0. For applications, with such a simple computation, the efficiency of the design is quite satisfactory for small error probabilities conditions.However, this method may only serve to control the ASN on the three hypothesis values since the asymptotic ASN formulas out of these points are absent so far.And the quantities D θ i , O θ i i −1, 0, 1 , and v should be deduced according to specific distributions see 23 .Besides, the discrepancies between the real performances and the required ones show the method's conservativeness.In the next section, an improved method is proposed and more efficient formulas are developed through the numerical quadrature.
Designs under Numerical Quadrature Formulas
This section proposes a method to obtain more efficient hypothesis designs and test schemes through a system of equations based on the numerical quadrature formulas of the error probabilities and ASN.
In studies by Payton and Young in 20, 21 , for the provided hypotheses, the error probabilities γ 1 , γ 2 , γ 3 , γ 4 are approximately attained by solving a system of equations about the 4 scheme parameters (n 0 , a, c, d). This method is hoped to fully make use of the required error probabilities and to obtain efficient designs. Enlightened by Payton and Young, we propose to find the hypothesis design and test scheme by solving the following system of equations, the last of which is ASN(θ ASN ) = N:
3.1
Obviously, the key is to find the formulas of the error probabilities and ASN on the left side of the equations in 3.1 .Unfortunately, the available approximate formulas cannot meet applicable needs well.For example, Payton and Young 20, 21 adopted the formulas under the continuous-time process and the required minimum sample size before decisions, and obtained some inefficient results.Also, as mentioned in Section 2, Dragalin et al.'s results are restricted to the conditions of small error probabilities and θ ASN θ i i −1, 0, 1 22, 23 .
To find efficient and applicable designs, we develop the approximate formulas through the numerical quadrature for the three-hypothesis test scheme's performances on the error probabilities and ASN.
To deduce the formulas for the realistic discrete-time situation, we denote n t as the minimum integer that is not less than n 0 . Let L j and U j be the values on the two boundaries DQ and AL in Figure 1 at n = j, that is, L j = d + j tan ϕ and U j = a + j tan ψ, j = 1, . . ., n t . Denote c L = c + (n t − n 0 ) tan ϕ, c U = c + (n t − n 0 ) tan ψ, a′ = a + n t tan ψ, and d′ = d + n t tan ϕ. With the decision rule (1.3), we rewrite the system (3.1) as (3.2), in which the quantities are defined as follows: the point of accepting H 1 or H −1 when n ≤ n t ; N 1 (θ), the average sample number from a point in (c U , a′) at n = n t to the point of making a decision when n > n t ; and N −1 (θ), the average sample number from a point in (d′, c L ) at n = n t to the point of making a decision when n > n t .
The following theorem provides the approximate formulas through the numerical quadrature for the quantities in (3.2). In fact, these formulas are developed by a stepwise treatment of the continuing sampling area before n 0 and the results by Li and Pu in 24 for the parallel-line areas inside AL//CM and inside CP//DQ, respectively. With such an idea, the proof of Theorem 3.1 is trivial and is neglected here. Theorem 3.1. Assume that X 1 , X 2 , . . . are i.i.d. observations. Let f θ (x) and F θ (x) be the p.d.f. and c.d.f. of X, respectively, and assume that F − θ (x) = P θ (X < x). Denote g 1θ (x) = f θ (x), and define g (j+1)θ (x) recursively, where u j i is the ith numerical quadrature root for (L j , U j ), i = 1, . . ., m, j = 1, . . ., n t − 1, and ω(u) is the corresponding weight for the numerical quadrature root u. Let u n t i and u′ n t i be the ith numerical quadrature roots for (c U , a′) and for (d′, c L ), respectively, i = 1, . . ., m.
Then, the approximate values
3.10
Notice that the values on the left side of the equations in (3.2) must be obtained through a computer program with much iterative work, which reveals the method's computational complexity and impairs the speed of solving the system (3.2). Nevertheless, the time it costs is tolerable when the accuracy of solving the equations is not too demanding.
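For the quadrature itself, the m-point Gauss-Legendre nodes and weights on [−1, 1] only need a linear rescaling to each interval such as (L j , U j ). A minimal sketch is shown below; the integrand in the check is illustrative (a standard normal density), not the paper's g jθ functions.

```python
import numpy as np

def gauss_legendre_on_interval(lo, hi, m=64):
    """m-point Gauss-Legendre nodes and weights rescaled from [-1, 1] to [lo, hi]."""
    x, w = np.polynomial.legendre.leggauss(m)
    nodes = 0.5 * (hi - lo) * x + 0.5 * (hi + lo)
    weights = 0.5 * (hi - lo) * w
    return nodes, weights

# illustrative check: integrate the standard normal density over (-0.7975, 0.7975)
u, wu = gauss_legendre_on_interval(-0.7975, 0.7975)
density = np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)
print(np.sum(wu * density))   # close to Phi(0.7975) - Phi(-0.7975)
```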
Example 1 (Continued). Consider the same problems as those in Example 1 in Section 2. By applying the formulas (3.3)-(3.12) and the 64 Gaussian quadrature roots, we solve the system (3.2) in a computer program. The hypothesis designs and the corresponding test schemes are listed in Columns 6-9 of Tables 1-4. As a comparison with the method under the asymptotic formulas in Section 2, ε k in Column 10 of Tables 1-4 records the relative difference between the hypothesis designs of the two methods. The Monte Carlo simulation study with 1,000,000 replicates in Tables 9, 10, 11, and 12 reveals the schemes' real performances.
The real performances in Tables 9-12 show that the requirements on controlling the error probabilities and ASN may be fully made use of under this method, and that the numerical quadrature formulas are almost accurate. Therefore, the hypothesis designs and test schemes are highly efficient, as reflected, for instance, in the more efficient designs with smaller k in Tables 1-4 under this method.
To further explain the methods, an example of the airbag quality inspection is provided in the appendix.
Conclusions and Remarks
For the three-hypothesis test problems, the methods of designing the hypotheses, together with obtaining the corresponding test schemes, are proposed by adopting asymptotic formulas or numerical quadrature formulas in this paper.As a helpful guide for practitioners, they aid to directly find proper hypotheses under controlled risks and costs in preventing from too many iterative trials on combinations of hypotheses to meet practical needs.
The asymptotic formulas and the numerical quadrature formulas are both alternative tools for the hypothesis designs. Several aspects should be considered when choosing between them in applications.
(1) The method with numerical quadrature formulas outperforms the one under the asymptotic formulas, especially when the error probabilities are not very small, as the example shows. In reality, the required error probabilities always range from 0.05 to 0.30 in sequential inspections, which seems to suggest choosing the numerical quadrature formulas to obtain efficient designs.
(2) In computation, the asymptotic formulas provide great convenience for applications, while the numerical quadrature formulas demand much iterative computational work, especially when the number of numerical quadrature roots is large. But from the computation with the 64 Gaussian quadrature roots in the example, the time it costs on a common computer is tolerable if the start values for the system of equations are proper. We recommend finding the designs under the asymptotic formulas first, and then applying them as starts to obtain more efficient hypotheses from the numerical quadrature formulas when needed.
(3) When adopting the asymptotic formulas, the expressions for the quantities D(θ i ), O(θ i ) (i = −1, 0, 1), and v should be developed for a specific distribution (see 23). For the use of numerical quadrature formulas, the quadrature roots may be particularly arranged to fit the support points of discrete distributions (e.g., see Reynolds and Stoumbos 25). And for a θ ASN outside θ −1 , θ 0 , θ 1 , only the method with numerical quadrature formulas may take effect.
Actually, the two methods may apply to any distribution out of the Koopman-Darmois family.However, the test schemes under these distributions may be different from that in Figure 1, and the numerical quadrature formulas should be changed according to the test scheme patterns.
For the hypothesis designs asymmetric with the null hypothesis or the multihypothesis test problems, the methods proposed in this paper are still applicable through extensions that add more constraints on the designs. The hypothesis design problems under other requests, for example, under the desire of stopping sampling before a limit guaranteed by a provided probability, are still open to scholars and practitioners.

Taking the simulated observations from N(0, 1) by Li et al. in 26 , we may reach a decision of accepting H 0 when T 3 = −0.0642 falls in (c + (3 − n 0 ) tan ϕ, c + (3 − n 0 ) tan ψ) = (−0.7975, 0.7975) according to the test process in Table 13.
Under the method with Gaussian quadrature formulas, the hypothesis test problem should be H −1 : μ = −1.3025 vs. H 0 : μ = 0 vs. H 1 : μ = 1.3025 (A.2). Also taking the simulated observations from N(0, 1) by Li et al. in 26 , we may accept H 0 after inspecting the third airbag according to the test process in Table 14.
Figure 1 .
Table 5 :
Simulated performances for the schemes under asymptotic formulas in Table1.
Table 6 :
Simulated performances for the schemes under asymptotic formulas in Table2.
Table 7 :
Simulated performances for the schemes under asymptotic formulas in Table3.
Table 8 :
Simulated performances for the schemes under asymptotic formulas in Table4.
Table 9 :
Simulated performances for the schemes under Gaussian quadrature formulas in Table1.
Table 10 :
Simulated performances for the schemes under Gaussian quadrature formulas in Table2.
Table 11 :
Simulated performances for the schemes under Gaussian quadrature formulas in Table3.
Table 12 :
Simulated performances for the schemes under Gaussian quadrature formulas in Table4.
Table 13 :
Test process under the test scheme from asymptotic formulas.
|
2017-07-31T01:24:05.492Z
|
2010-05-25T00:00:00.000
|
{
"year": 2010,
"sha1": "3bb3e62ebae5b76a851e103221257b584a3809b8",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2010/393095.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "99a6e1731458b832a5600c6c7d1ba514ed30ee70",
"s2fieldsofstudy": [
"Engineering",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
}
|
260826480
|
pes2o/s2orc
|
v3-fos-license
|
Fire Detection and Notification Method in Ship Areas Using Deep Learning and Computer Vision Approaches
Fire incidents occurring onboard ships cause significant consequences that result in substantial effects. Fires on ships can have extensive and severe wide-ranging impacts on matters such as the safety of the crew, cargo, the environment, finances, reputation, etc. Therefore, timely detection of fires is essential for quick responses and powerful mitigation. The study in this research paper presents a fire detection technique based on YOLOv7 (You Only Look Once version 7), incorporating improved deep learning algorithms. The YOLOv7 architecture, with an improved E-ELAN (extended efficient layer aggregation network) as its backbone, serves as the basis of our fire detection system. Its enhanced feature fusion technique makes it superior to all its predecessors. To train the model, we collected 4622 images of various ship scenarios and performed data augmentation techniques such as rotation, horizontal and vertical flips, and scaling. Our model, through rigorous evaluation, showcases enhanced capabilities of fire recognition to improve maritime safety. The proposed strategy successfully achieves an accuracy of 93% in detecting fires to minimize catastrophic incidents. Objects having visual similarities to fire may lead to false prediction and detection by the model, but this can be controlled by expanding the dataset. However, our model can be utilized as a real-time fire detector in challenging environments and for small-object detection. Advancements in deep learning models hold the potential to enhance safety measures, and our proposed model in this paper exhibits this potential. Experimental results proved that the proposed method can be used successfully for the protection of ships and in monitoring fires in ship port areas. Finally, we compared the performance of our method with those of recently reported fire-detection approaches employing widely used performance matrices to test the fire classification results achieved.
Introduction
Around 71% of the Earth's surface is covered by oceans, and the enormous water area produces natural canals [1]. Ships are the oldest means of transportation and have helped mankind in various matters of life by using water. Since the 15th century, the fast expansion of shipping has made it possible for humans to move between continents with the main purpose of transporting goods and travelers, and the massive exchange of personnel and things has drastically affected the social and natural landscape. Ships have evolved with the passage of time. Advancements in technology have brought a whole new potential to the shipping experience and made it more reliable for humans to work through sea routes.
While shipping is the most effective use of transportation, it also brings major hazardous threats to the lives on ships. Among all the dangers that come with the shipping experience, the threat of fire is one common occurrence. Passengers and crew onboard are more exposed to danger if the cargo includes highly inflammable material such as oil, gas, coal, or wood. If any of these items catch fire, the results would be catastrophic.
Additionally, a fire accident on a ship is highly likely to be fatal to human life because it is difficult to receive fire suppression support from the outside due to the nature of the closed and isolated space of the sea, and it is necessary to extinguish the fire with a limited amount of personnel and equipment [2].
The key factor which results in escalating ship fires is a lack of awareness and prior safety measures. Having no fire prevention and detection measures leads to some unpleasant incidents. Hence, installing such an efficient and reliable system to detect a fire at its early stage is crucial.
To prevent the expansion of a fire, two methods are commonly used: (1) Traditional fire alarm system; (2) Fire detection system based on computer vision. A fire alarm system comprises physical sensors such as flame detectors, thermal detectors, smoke detectors, etc. The main drawback of this type of system is that it involves human intervention to check and approve the evaluation of a fire. In addition, it uses a variety of tools to detect the fumes, smoke, and intensity of a fire. These sensors can detect a fire when it evolves, and fumes and smoke lead to flames, but it is risky to let a fire expand to such an extent that it can lead to serious damage. A 10 min delay in putting out a fire in the engine room may cost USD 200,000, while a 20 min delay may cost up to USD 2,000,000 [3].
The alternative to fire alarm systems is AI-based fire detection. Lately, the use of deep learning algorithms has found its way into detecting fires through images. Recent research has proved the effectiveness of computer vision- and deep learning-based methods for fire detection [4,5]. Deep learning target detection can automatically extract image details and features, effectively overcoming the redundancy and interference caused by the manual extraction of image features for fire detection [6]. Fire detection is a more challenging task and needs proper, up-to-date technology to alert the crew. To overcome this issue, one of the most popular open-source projects in computer vision is used, named YOLO (You Only Look Once). YOLO is an efficient real-time object detection algorithm that divides an image into a grid system, and each grid detects objects within itself. It can be used for real-time inference and requires very few computational resources [7]. YOLOv6 has improved object detection accuracy, particularly for recognizing small objects. YOLOv6 is an innovative system for real-time object detection based on deep learning [4]. However, due to higher power consumption and being computationally expensive in nature, it comes with some limitations, which were addressed in YOLOv7. YOLOv7's basic premise is to enhance detection accuracy and performance while simultaneously minimizing the number of parameters and amount of processing required. When it comes to the detecting layer, YOLOv7 employs not one but two heads: the lead head and the auxiliary head. Because of their interaction, these two layers give a more detailed portrayal of the data's correlation and distribution [4]. Fire detection is a challenging process because of its ambiguous nature; the following are some of the most important advantages of the proposed strategy:
i. We will publish a dataset for fire detection that can be used to detect fires in both daytime and nighttime scenarios. Fire and flame predictions will be precise, and overfitting will be minimized as a deep CNN (convolutional neural network) learns from a vast database of fire detection images.
ii. We provide a YOLOv7-based active method of fire detection to strengthen protection and to avoid long operations.
iii. While rotating fire datasets by 25 degrees, a mechanism was devised to automatically re-order the flagged bounding boxes.
iv. During the training phase of YOLOv7, class predictions were generated utilizing independent logistic classifications and a binary cross-entropy loss. This is far faster than other detection networks [4].
v. In order to decrease the number of false positives in the fire recognition method, we utilized photos that resembled fire and excluded low-resolution photographs. Additionally, even in tiny fire zones, the suggested approach considerably improves accuracy and lowers the percentage of false detections.
YOLOv7 incorporates several new features, including a modified backbone network, improved feature fusion techniques, and more efficient training and inference processes. These modifications result in improved accuracy and faster inference times as compared to YOLOv4. In particular, the YOLOv7 model has shown superior performance in detecting small objects, making it well-suited for the detection of fires on ships, which are often localized and relatively small.
Related Work
Most of the object-detecting and object-recognizing algorithms depend on a particular type of deep neural network (DNN) and CNN. Learnable neural networks comprise various layers to perform object detection. Each layer is responsible for performing different tasks such as analyzing the areas, extracting features of the data obtained, identifying data, and detecting any anomalous behavior. Traditional fire detection methods were lacking in speed and accuracy and suffered from performance degradation. Deep learning fire detection techniques have emerged in the past decade, among which YOLO algorithms have aided in solving the major object detection problems. Development of YOLO's framework is highly based on improvements in the upcoming models. From the original YOLOv1 to the latest YOLOv8 algorithms, the model's performance with key innovations and differences has evolved to accomplish detection tasks.
Although YOLOv6 and YOLOv7 have their unique features and limitations, they share some common traits. For instance, they use deep convolutional neural networks as the backbone architecture, adopt a one-stage object detection paradigm, and employ modern optimization techniques such as batch normalization and adaptive moment estimation. However, they also differ in some respects; for example, YOLOv6 and YOLOv7 use anchorbased prediction and anchor-free prediction, respectively. YOLOv7, on the other hand, is a more lightweight version of YOLOv6 that addresses the computational and memory limitations of YOLOv6 [8]. YOLOv7 has shown promising results in fire detection, but it needs to be trained more accurately for rare situations like a fire in the engines. Its high accuracy, speed, and ability to detect small objects make it well-suited for the task at hand. The following are the related state-of-the-art works in the field of fire detection and object detection.
Traditional Fire Detection Techniques
A typical fire detection system onboard a ship involves sensors (fire/smoke/heat) and an alarm panel [9]. Fire detectors are designed to provide a visible and audible alarm on the vessel to indicate the location of a fire. The detectors throughout the ship are wired to a fire control panel that provides visual and auditory alerts and possibly alarms in other parts of the vessel as well. The authors in [10] proposed a ship fire monitoring and alarm system using CAN bus technology. Some types of detectors may sense a rapid rise in temperature in a brief period and then alert the ship, while others may detect fires on a visual basis such as smoke or fire on a camera system to set off said alarm. Traditional fire detection systems involve the need for physical sensors that require human intervention to confirm the occurrence of a fire. Several other tools are incorporated with sensing devices to detect fire, flames, and smoke. However, these detectors are inefficient, as they cannot distinguish between smoke and fire, thus resulting in false alarm generation.
Different Fire Detection Methods Using Deep Learning Algorithms
With the growth of AI, numerous research attempts have been made to detect the presence of fire/smoke in images using machine learning and deep learning models. In a range of computer-based vision applications, such as visual recognition and image classification, the introduction of CNNs has resulted in significant performance gains [11]. The convolutional neural network (also known as CNN or ConvNet) is one of the most popular deep neural networks in deep learning, especially when it comes to computer vision applications. It uses a special technique called convolution. In [12], detection of fire and smoke through images and videos using deep learning algorithms is performed, including a CNN-based architecture to train a model with many images for a dataset. Dilated convolutional layers have been built to avoid depth of learning, which means to learn larger data by ignoring the minute details. Classifying smoke and fire to reduce false fire alarms is accomplished successfully in this research work. Differentiation between smoke and fire in images and videos is accomplished by using a dilated CNN to learn the robustness of features from the scene. The experimental results indicate that the proposed method performs slightly better than well-known neural network architectures on their custom dataset. However, the main limitation is that errors occur when pixel values come closer to those of background and edges or are not detected by the CNN. Because it is a custom-built dataset, it is computationally expensive. In [13] a deep learning-based fire detection system called Detection and Temporal Accumulations (DTA) is used that imitates the human detection process to improve the accuracy of fire detection while reducing false detections and misinterpretations. The proposed method successfully interprets the temporal SRoF behaviors and improves the fire detection accuracy. The faster R-CNN model is used, which can detect multiple objects in a frame. It can detect fire, flames, and smoke in a frame. Long short-term memory (LSTM) to accumulate the temporal behaviors and to decide whether there is a fire or not in a short-term period is also used. In [14], the authors designed a lightweight convolutional neural network for early fire and smoke detection which successfully achieved competitive accuracy. It uses two satellite imagery datasets and three smoke-related scene classes, namely, "Smoke", "Clear", and "Other aerosol". This model needs more improvements, as some of the smoke patches were misinterpreted as "Clear" or "Other aerosol", which is troubling for the early prediction of fires [14].
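The effect of the dilated convolutions discussed above can be illustrated with a short PyTorch comparison; this is only an illustration of the operator, not the architecture used in the cited work.

```python
import torch
import torch.nn as nn

# A plain 3x3 convolution sees a 3x3 neighbourhood; with dilation=2 the same nine
# weights are spread over a 5x5 area, enlarging the receptive field without adding
# parameters -- the idea behind dilated fire/smoke CNNs.
standard = nn.Conv2d(3, 16, kernel_size=3, padding=1)
dilated = nn.Conv2d(3, 16, kernel_size=3, padding=2, dilation=2)

x = torch.randn(1, 3, 224, 224)              # dummy RGB image batch
print(standard(x).shape, dilated(x).shape)   # both keep the 224x224 spatial size
```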
Fire Detection Using YOLO (You Only Look Once) Algorithms
In [15], the YOLOV3 algorithm is used for a small-scale flame detection method. This method was proposed to achieve the detection of different scales of flames using an improved K-means clustering algorithm. The authors of [16] introduce a fire-YOLO algorithm. It adds depth-separable convolution to YOLOv4 and helps to reduce the computational costs of the model and improve the perceptual field of the feature layer by using a cavity convolution method. The authors of [17] proposed a fire detection technique for urban areas using ELASTIC-YOLOv3 as an improvement on YOLOv2 to amplify the performance without introducing more parameters. Traditional fire algorithms, especially the ones for nocturnal fire detection, suffered from issues like high light intensity, lack of color information, changes in shapes and sizes of flames, etc., for which more advanced and improved real-time fire detection and recognition systems came with modified YOLO algorithms (v4, v5), as proposed in [18,19].
Fire Dataset Description
To train the model, we collected a diverse range of images of fires from various internet sources, including some videos to increase the size of the dataset. For this purpose, images obtained from distinct angles, focal lengths, and brightening conditions were utilized in our dataset to elevate the accuracy of the system. Even after exploring different resources, the images were not enough for the dataset, so additionally, images from publicly accessible dataset platforms such as Roboflow and Kaggle were included to broaden the dataset, as shown in Table 1. The illustration features fire items, flames, and burning displays. The dataset's diversity strengthens the model's ability to generalize unseen or unexpected data and adapt to changing conditions. Containing both diurnal and nocturnal images, our dataset reached a total of 4186 images of fires (Figures 1 and 2). Another 436 images of non-fire scenarios were added to expand the diversity of the dataset for more accurate results. Altogether, the dataset contains 4622 images from both the fire and non-fire categories (Table 2). They are divided as follows: 70% for training, 10% for testing, and 20% for validation. For the test dataset, we tried to accumulate as many realistic images as possible because the fire detection unit must ultimately work in these situations only. Because our training dataset is already diverse enough, we used these realistic images for testing purposes only [20].
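A split along the stated proportions can be produced with a short script; the directory names and the .jpg pattern below are placeholders, not the authors' actual layout.

```python
import random
import shutil
from pathlib import Path

def split_dataset(image_dir, out_dir, train=0.70, val=0.20, seed=42):
    """Shuffle images and copy them into train/val/test folders (70/20/10 split)."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n_train = int(len(images) * train)
    n_val = int(len(images) * val)
    splits = {
        "train": images[:n_train],
        "val": images[n_train:n_train + n_val],
        "test": images[n_train + n_val:],           # remaining ~10%
    }
    for name, files in splits.items():
        dst = Path(out_dir) / name
        dst.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, dst / f.name)

# split_dataset("dataset/images", "dataset/split")
```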
Even so, there is a high likelihood of overfitting, chiefly because the collected data cannot adequately capture all potential input conditions. Applying data augmentation techniques to expand the training dataset is an efficient way to combat overfitting. Augmentation applies geometric transformations and photometric distortions to the images, such as scaling, rotation, random crops, vertical and horizontal flipping, and contrast enhancement, to increase the variety of images for model training and improve accuracy. The size and resolution of the images directly impact the efficiency of the trained model. In addition, one of the biggest problems in object detection is degraded performance under difficult conditions such as sun reflection or lack of light [21]. It is therefore important to apply data augmentation techniques to expand the dataset for model training [22][23][24][25]. By applying rotation and scaling to each image, we doubled the number of available images. Because the aim is to train the model to detect fires, which requires a large number of fire images, augmenting the data allows the object detection system to learn objects from different perspectives and reach the maximum performance and accuracy of the model; a sketch of such an augmentation pipeline is given below.
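The sketch below shows one way such an augmentation pipeline could be assembled with the albumentations library, keeping YOLO-format bounding boxes consistent with each transformed image. The specific transforms, probabilities, file name, and box coordinates are illustrative assumptions, and exact class names or signatures can differ between albumentations versions.

```python
import cv2
import albumentations as A

# Flips, affine scaling/rotation, and contrast changes mirror the
# augmentations listed above; bbox_params keeps YOLO boxes in sync.
transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.2),
        A.Affine(scale=(0.8, 1.2), rotate=(-15, 15), p=0.7),
        A.RandomBrightnessContrast(p=0.5),
        A.Resize(height=640, width=640),  # fixed input size for training
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.imread("fire_001.jpg")      # hypothetical sample frame
boxes = [[0.48, 0.52, 0.20, 0.30]]      # [x_center, y_center, w, h], normalized
labels = [0]                            # class 0 = fire

augmented = transform(image=image, bboxes=boxes, class_labels=labels)
aug_image, aug_boxes = augmented["image"], augmented["bboxes"]
```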
Setting up the original dataset during data collection is followed by data annotation, an important step for the efficient performance of the trained model. If the bounding boxes around the classes of interest are drawn too loosely, the models may be forced to generalize from a false assumption. Conversely, overly tight bounding boxes can cut off part of the relevant class, again risking false generalization during training [3]. To avoid both risks, we annotated with bounding boxes drawn in close proximity to the object boundaries; the label format used for these annotations is sketched below.
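For reference, YOLO-style annotation stores one text line per box: the class index followed by the box centre and size, normalized by the image dimensions. The helper below converts a tight pixel-space box into that format; the example coordinates are hypothetical.

```python
def to_yolo_line(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a tight pixel-space box to one line of a YOLO label file:
    '<class> <x_center> <y_center> <width> <height>', all normalized to [0, 1]."""
    x_c = (x_min + x_max) / 2.0 / img_w
    y_c = (y_min + y_max) / 2.0 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# Example: a fire region near the centre of a 640x480 frame.
print(to_yolo_line(0, 200, 150, 440, 330, 640, 480))
```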
Methodology
YOLO models not only perform object detection and recognition but are also used for instance segmentation, semantic segmentation, and pose tracking. They work by dividing an image into a grid and predicting bounding boxes and class probabilities for each grid cell. The evolution of the YOLO models has been driven mainly by the accuracy and speed of the algorithm. YOLOv7 has achieved much better performance across the 5 FPS-to-160 FPS range, exceeding the speed and accuracy of previously known target detectors [26].
YOLOv1, the original and earliest YOLO model, was introduced to detect real-time objects but was lacking in speed and detection of small objects. YOLOv2 helped in achieving faster inference times with more advanced extraction of features, but anchor boxes were of the same size for every object. YOLOv3 brought advancement by allowing scaled anchor boxes for the respective sizes of objects, along with improved speed and accuracy, but suffered in detecting small objects and had higher memory requirements. With advanced data augmentation techniques, YOLOv4 brought improvements to maintain real-time performance. While it is common to compromise on accuracy to improve the speed of a model, YOLOv5 came with the aim of maintaining accuracy with an increase in speed by using advanced techniques of training such as focal loss, label smoothing, and multi-scale training. YOLOv6 successfully achieved stronger performance against the MS COCO dataset. It is more efficient than all previous versions of YOLO in terms of accuracy. It has introduced a new technique to generate boxes called "Dense anchor boxes".
YOLOv7, on the other hand, outperforms its predecessors with an efficient backbone network (E-ELAN) and an improved feature fusion technique (Figure 3). For the detection and recognition of small objects in compound environments, YOLOv7 is preferable to YOLOv6. YOLOv7 proposed a couple of architectural changes and a series of bag-of-freebies methods, which increase accuracy without affecting inference speed, only training time [27]. It combines an attention mechanism with a re-parameterization convolutional structure [28]. Like YOLOv6, YOLOv7 draws on re-parameterized convolutions (RepConv), which amalgamate different convolutional modules into a single module at inference. Re-parameterization techniques fall into two categories: the model-level ensemble, which trains multiple models of the same nature with different training data, and the module-level ensemble, which has gained more popularity because it acts as a weighted average over the weights of a model at various iterations. However, some re-parameterization techniques are architecture-specific, meaning they only work with particular architectures. YOLOv7 overcomes this drawback by using gradient flow propagation paths to determine which modules within the overall model require re-parameterization [29]. YOLOv7 uses an E-ELAN architecture as the backbone of the algorithm. E-ELAN uses expand, shuffle, and merge cardinality operations to continuously enhance the learning ability of the network without destroying the original gradient path [8].
Different models of YOLOv7 include YOLOv7, YOLOv7-W6, YOLOv7-tiny, YOLOv7-X, YOLOv7-E6, and YOLOv7-D6. YOLOv7 is the basic model for ordinary GPU computing, YOLOv7-W6 is optimized for cloud GPU computing, YOLOv7-tiny targets edge GPUs, and YOLOv7-X, -E6, and -D6 are obtained from a compound scaling method. A major advantage of YOLOv7 over its antecedents is its combination of speed and accuracy, which allows it to perform object detection more precisely, as shown in Figure 4. Unlike other state-of-the-art algorithms, YOLOv7 processes images at a speed of 155 frames per second, much faster than earlier versions, and it achieves 37.2% as its IoU on the MS COCO dataset. A comparison of the average precision (AP50) values of the YOLOv7 variants on the MS COCO dataset is given in Table 3.
Fire Detection Using YOLOv7
Among the various evaluation metrics, such as average precision (AP), F1 score, recall, and mAP, intersection over union (IoU) is used specifically to demonstrate YOLOv7's localization capacity. IoU measures the amount of overlap between the detected object and the ground-truth object [32]. The general equation for IoU is

IoU = Area of Overlap / Area of Union,

and it is standard to use IoU to gain insight into a model's overall localization performance. The YOLOv7 model performs object detection in a single stage. First, it separates the input image into N grid cells of equal size. Every region of the image is analyzed to detect the classified objects: in each cell, bounding boxes are predicted along with a label and a probability score reflecting the potential presence of an object. Because predictions from neighbouring cells overlap, redundant boxes occur. The YOLO architecture uses a mechanism called non-maximal suppression to keep only the objects of interest. Bounding boxes with low probability scores are suppressed by comparison with the box of the highest probability score, and any remaining box with a large intersection over union (IoU) with that highest-probability box is removed. This iteration continues until only the desired highest-probability boxes remain, as shown in Figure 5; a compact sketch of IoU and non-maximum suppression is given below. Object detection models require knowledge of the depth of the network, the width of the network, and the resolution of the trained network. YOLO, like other object detection models, traditionally used single-dimension scaling methods, which cannot scale up a desired dimension without changing the input and output channels of a transition layer. YOLOv7, unlike its peers, scales the depth and width of the network simultaneously while connecting layers together, a mechanism that preserves the optimization of the model while scaling it to various sizes.
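The following sketch shows the IoU computation and a plain non-maximum suppression loop in the form described above. It is a simplified illustration, not the implementation used inside YOLOv7, and the boxes, scores, and threshold are made-up values.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box and drop any remaining box whose IoU
    with it exceeds the threshold; repeat until no candidates remain."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        order = np.array([j for j in rest
                          if iou(boxes[best], boxes[j]) <= iou_threshold], dtype=int)
    return keep

# Toy example: two overlapping "fire" detections and one distant detection.
boxes = np.array([[10, 10, 110, 110], [20, 20, 120, 120], [300, 300, 380, 380]], dtype=float)
scores = np.array([0.92, 0.75, 0.60])
print(non_max_suppression(boxes, scores))   # -> [0, 2]
```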
Model Evaluation
We implemented and tested our model using Visual Studio 2022 C++ on a laptop with a 3.20 GHz CPU, 32 GB of RAM, and 3 GPUs. To test our ship fire detection model, we evaluated it in different environments. The dataset was collected from various internet resources and annotated in YOLO format. Both the YOLOv6 and YOLOv7 models were trained with suitable values for the different parameters. PyTorch, a deep learning framework, was used to train the models on Google Colab Pro with an Nvidia A100 GPU and 48 GB of memory, and various tests were conducted to validate the performance and effectiveness of the trained models. Metrics such as accuracy, precision, recall, F1 score, AP, and mAP are crucial for determining the validity of models and are the standard evaluation parameters in object detection. Accuracy reflects how close the model's predictions are to the true values overall. Precision is the ratio of true positives (TP) to all predicted positives, and recall is the ratio of true positives (TP) to all ground truths. The F1 score is the harmonic mean of precision and recall [33], which indicates better target detection accuracy [34]. The F1 score ranges between 0 and 1, with a higher value indicating better model performance, as detailed in these papers [35][36][37][38][39][40]. A minimal helper for computing these quantities from confusion-matrix counts is sketched below.
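The helper below turns confusion-matrix counts into the metrics discussed here. The counts passed in the example are hypothetical and are not the counts behind the results reported later.

```python
def detection_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn) if (tp + fp + fn + tn) else 0.0
    return precision, recall, f1, accuracy

# Illustrative counts only -- not the counts behind the reported results.
print(detection_metrics(tp=450, fp=29, fn=50, tn=100))
```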
TP, TN, FP, and FN are the terms used to compare a classification model's predictions with the ground-truth labels. A true positive (TP) is the number of pixels belonging to a fire that the model detects as positive and that the ground truth also labels as positive. A true negative (TN) is the number of non-fire pixels that the model detects as negative and that the ground truth also labels as negative. A false positive (FP) is the number of pixels the model detects as fire but that the ground truth labels as negative. A false negative (FN) is the number of pixels the model detects as non-fire but that the ground truth labels as positive (Table 4). The mAP is the mean AP, used to measure the general detection accuracy of the target detection algorithm. In summary, for the YOLO algorithm, AP and mAP are the best metrics for measuring the detection accuracy of the model [41], as shown in Table 5. Figure 6 shows the model evaluation metric curves. Mean average precision can be described as the mean of the per-class average precision values, mAP = (1/N) Σ APᵢ, where N is the number of classes and APᵢ is the average precision for class i.
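The sketch below computes AP as the area under an interpolated precision-recall curve and mAP as the mean over classes, following the common all-point interpolation. The precision-recall points are invented for illustration, and the exact interpolation scheme used by a given benchmark may differ.

```python
import numpy as np

def average_precision(recalls, precisions):
    """Area under the precision-recall curve (all-point interpolation)."""
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    # Make precision monotonically non-increasing from right to left.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

def mean_average_precision(ap_per_class):
    """mAP is simply the mean of the per-class AP values."""
    return sum(ap_per_class) / len(ap_per_class)

# Toy precision-recall points for the single 'fire' class.
recalls = np.array([0.1, 0.4, 0.7, 0.9])
precisions = np.array([1.0, 0.95, 0.90, 0.80])
ap_fire = average_precision(recalls, precisions)
print(ap_fire, mean_average_precision([ap_fire]))
```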
Analysis by Experiment
The experimental objective was to assess the performance and robustness of the system in effectively detecting fires on a ship, consolidating safety and early response capabilities. To raise the system's robustness and reliability, experiments were conducted under various defined conditions. We used a dataset of distinctive images of fires and flames from different sources, including publicly available datasets; the dataset was annotated with bounding boxes and labels, and a high-performance platform with an appropriate GPU was used for training. The evaluation of the model relies on several performance metrics: precision, recall, F1 score, and accuracy. Our fire detection system achieved a precision of 94%, implying that 94% of detected fire instances were indeed actual fires, which minimizes false positives and lowers the chance of false alerts and alarms. The recall of the system was 90%, implying that the system is sensitive enough to recognize and detect fires. The F1 score was estimated at 86%, showing a balanced trade-off between recall and precision and thus the overall efficiency of the system in detecting true fires. The proposed fire detection system accomplished an accuracy of 93%, indicating that fire instances in the test dataset are detected correctly. Overall, the experimental results indicate that the YOLOv7-based fire detection system for ships performed effectively: the high accuracy, precision, and recall values underline the system's robustness and reliability in identifying fire and flames, and the obtained F1 score further confirms accurate detection of fires. The precision-confidence curve is shown in Figure 7. It plots the precision of the detected fires against the confidence threshold set for fire detection. When the confidence threshold is set as low as 0.1, the precision obtained by our model is 0.94, which illustrates that the rate of true positives is relatively high and the system is efficient in detecting actual fires.
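A toy version of how such a precision-confidence curve can be produced is sketched below: detections are filtered at a series of confidence thresholds and the precision of the surviving detections is recorded at each one. The scores and ground-truth flags are invented purely for illustration.

```python
import numpy as np

def precision_at_thresholds(scores, is_true_fire, thresholds):
    """Precision of the detections kept at each confidence threshold."""
    precisions = []
    for t in thresholds:
        kept = scores >= t
        if kept.sum() == 0:
            precisions.append(1.0)  # convention when nothing is predicted
            continue
        precisions.append(float(is_true_fire[kept].mean()))
    return precisions

# Hypothetical detection scores and ground-truth flags (1 = real fire).
scores = np.array([0.95, 0.90, 0.82, 0.40, 0.15])
is_true_fire = np.array([1, 1, 1, 0, 0], dtype=float)
print(precision_at_thresholds(scores, is_true_fire, thresholds=[0.1, 0.5, 0.8]))
```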
Performance of Model in Varying Ambient Lighting
The system is tested in real-world scenarios with varying light conditions. The model is not only trained for daylight fire detection, but is also trained for situations where the lighting is comparatively darker. Hence, through experiments, the system responds well, performing in both day and night lighting conditions, which makes it more reliable.
Discussion
The success of the novel system is due to the YOLOv7 model's ability to localize fires and differentiate between fire and non-fire objects. The combination of deep learning techniques with the diverse dataset of 4622 images at 640 × 640 resolution contributes directly to the system's strong performance. It is worth noting that the system's efficiency and performance may vary under conditions such as lighting variations, smoke, sensitive environmental factors, and obstruction. Compared with other YOLO models, YOLOv7 gave superior results in detecting fires in bright and dark lighting, recognizing small fires and flames, and distinguishing fire from non-fire scenarios. The early detection of fires on ships is now improved with the proposed YOLOv7 system in real time.
Depending on weather, reflection, darkness, and sunlight, actual ship fire images can be dark and blurred. Table 6 compares the recently published fire detection methods with the proposed method.
Limitations
This research study proposed a fire detection system for ships utilizing the YOLOv7 algorithm, which exhibits high performance and accuracy in the detection of ship fires. However, some limitations of the proposed strategy bound its performance. It was observed during experimentation that some images containing fire-like objects were recognized as fire: if an image contains bright sunlight, intense yellowish-red lights, or fire-like bulbs, it may be detected as a fire, as shown in Figure 10. Moreover, the model detects red light with high illuminance as fire. These issues could be resolved by expanding the size of the dataset, and we aim to retrain our model with a more diverse dataset to overcome them [45][46][47].
Conclusions
In conclusion, this study proposed an improved and faster fire detection system for ships using the YOLOv7 architecture. The thorough experiments and evaluation demonstrate that the proposed system is highly efficient in detecting real-time fires in challenging environments. Our methodology comprises collecting a dataset with a vast number of images of various fire scenarios, preprocessing the collected dataset, which includes data augmentation, and model training. The evaluation of the model is compared with existing fire detection systems, and the results indicate that YOLOv7 exhibited high accuracy and the ability to detect fires.
The obtained mAP indicates that the performance achieved by our trained model, using the YOLOv7 architecture, is highly effective and can be utilized for detecting real-time fires in maritime environments. Deploying this model enables timely responses, allowing fires to be mitigated before they escalate. YOLOv7 makes a significant contribution by leveraging deep learning techniques to classify fire instances in time and prevent fire expansion, and it outperforms its peers in small-target detection, which raised our model's capability to detect minor fires.
Despite minimizing the risks of potential hazards and detecting fires at early stages, there are still areas for improvement. Our proposed system struggles to detect fire smoke and to detect fires when the level of ambient light is comparatively low. Future work on our model could emphasize broadening the dataset with more images of fire scenarios and other fire-related factors to enhance the adaptability of the model in various environments. Moreover, smoke detection could be added to the system to expand the implementation areas of the model.
A CASE OF LONG TERM SURVIVAL IN WOMAN WITH METASTATIC BREAST CANCER TREATED WITH TRASTUZUMAB
Intravenous Trastuzumab is an effective treatment for metastatic breast cancer after failure of first-line chemotherapy in patients with human epidermal growth factor receptor 2 (HER2)-positive tumors. The aim of this study is to present a case of long-term survival in a woman with metastatic breast cancer. The case is a 55-year-old female. She underwent left mastectomy with axillary lymphadenectomy for breast cancer. Histological examination showed invasive ductal carcinoma, grade III, with estrogen and progesterone receptor-negative, HER2-positive receptor status. Radiotherapy and six courses with anthracyclines were given as adjuvant chemotherapy. One year after the operation she was diagnosed with lung metastases. Treatment was initiated with Trastuzumab at an 8 mg/kg loading dose and a 4 mg/kg maintenance dose every week. Treatment was continued for more than two years. Control computed tomography indicated stable disease. No adverse events were reported during twenty-four months of Trastuzumab treatment. Treatment was stopped due to patient withdrawal. Overall survival was 31 months. This case indicates that long-term Trastuzumab may be an optimal treatment for HER2-positive breast cancer patients.
INTRODUCTION
Breast cancer represents a heterogeneous array of different disease subtypes that have unique molecular phenotypes and distinct clinical features (1). Despite advances in the treatment of early-stage breast cancer, approximately one third of patients will eventually develop metastatic breast cancer (MBC) (2). The prognosis of patients with MBC is poor, with a median survival time of 26 months (3). Recently, advances in understanding the biology of breast cancer have led to the classification of breast tumors based upon their molecular features and the advent of targeted therapies for the treatment of both early and metastatic cancer. Targeted agents and their promise of better patient outcomes with respect to safety, survival, and quality of life may change the clinical course for many MBC patients.
The human epidermal growth factor receptor (HER)-2 gene is a member of a gene family that encodes transmembrane receptor tyrosine kinases, including the epidermal growth factor receptor. Approximately 20%-30% of human breast tumors overexpress the HER-2 gene, and patients with HER-2-overexpressing tumors experience early progression and a poor prognosis in the metastatic setting (4,5). Trastuzumab, a humanized monoclonal antibody directed against the extracellular domain of HER-2, has been developed for the treatment of HER-2-overexpressing breast cancers. In the pivotal phase III randomized trials in patients with MBC, the application of trastuzumab after failure of first-line chemotherapy resulted in longer times to progression (TTP), higher response rates, and higher survival rates than with chemotherapy alone (6). However, HER-2-positive MBC is an aggressive disease, and despite these advances, the majority of patients treated with trastuzumab-based regimens progress within one year, with only very few patients experiencing prolonged remission (7).
The case report presented here describes a woman who underwent a mastectomy for invasive ductal carcinoma and subsequently received trastuzumab-containing chemotherapy as treatment for a metastatic lesion in the lung. She experienced stable disease and has been receiving maintenance trastuzumab for twenty-four months.
Case presentation
In February 2001, an otherwise healthy 55-year-old Caucasian woman, with no history of hormone therapy, smoking, drinking, or a family history of breast cancer, presented with a lump in the center of her left breast. The axillary and neck lymph nodes were not palpably enlarged. After breast biopsy, radiography of the chest, and ultrasound of the abdomen, the patient underwent a radical left mastectomy. Pathologic examination of the resected specimens diagnosed HER2-positive (immunohistochemistry 3+), hormone receptor-negative, grade III, invasive ductal carcinoma of the left breast with two positive axillary lymph nodes. The size of the primary tumor was 3.5 x 4.5 x 3 cm. She was treated with adjuvant chemotherapy, six cycles of epirubicin, 5-fluorouracil, and cyclophosphamide (60 mg/m2, 600 mg/m2, and 600 mg/m2, respectively). On completion of chemotherapy, radiation therapy was administered to the left breast. In November 2003, after CT, the patient presented with a single metastatic lesion (diameter 2.5 cm) in segment 7 of the right lung; no biopsy was carried out because the patient was unwilling to undergo such a procedure. Trastuzumab (4 mg/kg loading dose and 2 mg/kg maintenance dose weekly thereafter, with 21-day repetition) was started as first-line metastatic therapy in December 2004. Reevaluation of the lesion by radiography and CT followed regularly thereafter and showed stable disease. The patient continued to receive maintenance Trastuzumab monotherapy (6 mg/kg every three weeks) for 24 months and remains in stable disease. Throughout this period, the patient was in good health and led an active life without significant adverse effects. After two years of treatment with Trastuzumab she decided to stop the treatment, despite being informed about the possible chance of relapse after Trastuzumab withdrawal. She continues her treatment with observation only. The overall survival was 31 months after the start of Trastuzumab treatment.
DISCUSSION
Clinical management of MBC remains a significant therapeutic challenge as oncologists balance improvements in overall survival with patients' quality of life. Despite more than 30 years of research, MBC remains essentially incurable, with a median survival time of approximately two years (8). The prognosis is poorer in patients with HER-2-positive MBC (4). Trastuzumab-based therapies have greatly improved the survival rates of these patients, with the largest benefits seen when treatment is continued at least until disease progression (9). However, if Trastuzumab is withdrawn, there is the possibility that disease may relapse. Preclinical data suggest that previously suppressed tumor growth resumes rapidly if Trastuzumab is withdrawn (10). Effective treatment of HER-2-positive disease therefore seems to require prolonged attenuation of HER-2 activity, and it is difficult to define a time point beyond which Trastuzumab might not offer additional benefit. Furthermore, evidence in the literature supports the idea that continuing anticancer treatments as maintenance therapy in patients with stable disease may prolong the disease-free interval (11). There is an increasing number of case reports describing patients who experienced long-term remission from HER-2-positive MBC while receiving Trastuzumab maintenance therapy (12,13). The duration of remission in these cases ranges from four months to eight years, and in all cases, maintenance therapy was based on Trastuzumab.
One of these cases also illustrates the risk of withdrawing Trastuzumab treatment: the patient had experienced three years of full remission in the liver but relapsed in the central nervous system within two months of withdrawal of trastuzumab maintenance therapy (13). An important concern of many clinicians regarding long-term use of Trastuzumab is cardiac tolerability, owing to the unexpectedly high incidence of cardiac events reported by the early pivotal trials, particularly when associated with anthracyclines. It is difficult to compare trials with different end points and eligibility criteria; however, the understanding of Trastuzumab-related cardiac events has since improved, and the majority of these events are manageable and reversible. Extending Trastuzumab treatment does not appear to be associated with an increased risk of cardiac dysfunction. In studies of Trastuzumab treatment beyond progression, cardiac events appear to be relatively uncommon and mostly asymptomatic (14-16).
CONCLUSION
We suggest that a number of patients experience prolonged remission while receiving Trastuzumab maintenance therapy. We propose that the molecular profile of a tumor and its biological environment, as governed by the specific traits of a patient, will influence whether a patient achieves long-lasting remission on maintenance Trastuzumab therapy. The specific localization of breast cancer metastases may also be a factor in long survival, as many of the cases reported to date are mainly associated with liver metastases (12,13). Why this might be contributory needs additional investigation.
Assessing SNP genotyping of noninvasively collected wildlife samples using microfluidic arrays
Noninvasively collected samples are a common source of DNA in wildlife genetic studies. Currently, single nucleotide polymorphism (SNP) genotyping using microfluidic arrays is emerging as an easy-to-use and cost-effective methodology. Here we assessed the performance of microfluidic SNP arrays in genotyping noninvasive samples from grey wolves, European wildcats and brown bears, and we compared results with traditional microsatellite genotyping. We successfully SNP-genotyped 87%, 80% and 97% of the wolf, cat and bear samples, respectively. Genotype recovery was higher based on SNPs, while both marker types identified the same individuals and provided almost identical estimates of pairwise differentiation. We found that samples for which all SNP loci were scored had no disagreements across the three replicates (except one locus in a wolf sample). Thus, we argue that call rate (amplification success) can be used as a proxy for genotype quality, allowing the reduction of replication effort when call rate is high. Furthermore, we used cycle threshold values of real-time PCR to guide the choice of protocols for SNP amplification. Finally, we provide general guidelines for successful SNP genotyping of degraded DNA using microfluidic technology.
Supplementary Table S6
Pairwise F ST values for European wildcats and domestic cats with microsatellite data (above the diagonal, n = 24 samples and 14 loci) and SNP data (below the diagonal, n = 35 samples and 65 loci). Note that the SNP panel we used here was designed to detect hybridization of wildcats and domestic cats (that is, maximize differentiation). All samples were collected in Germany. Potential hybrids (based on SNP data) were excluded from these analyses (n = 1 for msats, n = 2 for SNPs). Probability values were based on 999 permutations; ***p ≤ 0.001.
Supplementary Table S7
Pairwise FST values for brown bears with microsatellite data (above the diagonal, n = 55 samples and 18 loci) and SNP data (below the diagonal, n = 55 samples and 69 loci). All samples were collected in Greece. Only groups with n > 5 were considered (16 bears from 5 locations excluded). Probability values were based on 999 permutations; n.s., not significant; *p ≤ 0.05; **p ≤ 0.01; ***p ≤ 0.001. Negative values were converted to zero.
STRUCTURE plots showing results for the most likely K (***), second most likely K (**) and third most likely K (*) as calculated with the Evanno method based on SNP and microsatellite genotypes and their combination. Colour-coded bars below the STRUCTURE plots correspond to the sample groupings based on sampling region (grey wolves, brown bears) or species identification (wildcats or domestic cats, based on SNP data).
Supplementary Figure S4
STRUCTURE plots showing results for the most likely K (***), second most likely K (**) and third most likely K (*) (upper panels) as calculated with the Evanno method (lower panels, respectively) based on SNP and microsatellite data sets. Colour-coded bars below the STRUCTURE plots correspond to the sample groupings based on sampling region (grey wolves, brown bears) or species identification (wildcats and domestic cats, based on SNP data).
Supplementary Figure S5
PCoA for wolves and wildcats showing outliers (SNP data, original data set). Further examination of these samples indicated low SNP call rates (wolf 71%; wildcats 77%, 78%). These samples were removed from the figure in the main text, Figure 3, and from further analyses.
Supplementary Figure S6
PCoA analyses for subsets of SNP and microsatellite markers used in this study to genotype grey wolves. Each point represents an individual's genotype, colour-coded to its sampling region. Subsets of loci were selected based on highest heterozygosity (H E ) for each locus.
Supplementary Figure S7
PCoA analyses for subsets of SNP markers used in this study to genotype grey wolves. Each point represents an individual's genotype, colour-coded to its sampling region. Subsets of loci were selected randomly; three times each case (a, b, c).
Supplementary Figure S8
PCoA analyses for subsets of SNP and microsatellite markers used in this study to genotype European wildcats, domestic cats and hybrids. Each point represents an individual's genotype, colour-coded to species identity. Subsets of loci were selected based on highest heterozygosity (H E ) for each locus.
Supplementary Figure S9
PCoA analyses for subsets of SNP and microsatellite markers used in this study to genotype European wildcats, domestic cats and hybrids. Each point represents an individual's genotype, colour-coded to species identity. Subsets of loci were selected based on highest F ST for each locus.
Supplementary Figure S10
PCoA analyses for subsets of SNP markers used in this study to genotype European wildcats, domestic cats and hybrids. Each point represents an individual's genotype, colour-coded to species identity. Subsets of loci were selected randomly; three times each case (a, b, c).
Supplementary Figure S11
PCoA analyses for subsets of SNP and microsatellite markers used in this study to genotype brown bears. Each point represents an individual's genotype, colour-coded to its sampling region. Subsets of loci were selected based on highest heterozygosity (H E ) for each locus.
Supplementary Figure S12
PCoA analyses for subsets of SNP markers used in this study to genotype brown bears. Each point represents an individual's genotype, colour-coded to its sampling region. Subsets of loci were selected randomly; three times each case (a, b, c).
Mitochondrial DNA sequencing
Brown bear hair samples were checked macroscopically for species identification, in order to avoid wild boar hairs. Grey wolf scats and cat hairs were checked for species identity using mtDNA sequencing in order to avoid samples from other species (mainly, fox, dog or domestic cat). PCR Table S1), 0.2 µl of Taq DNA polymerase (5 U/µl) (New England BioLabs) and 6.1 µl of molecular grade water. PCRs were performed in a T1 plus Thermocycler (Biometra). Initial denaturation was at 95 °C for 3 min, followed by 35 cycles of 94 °C for 30 s, 54 °C for 30 s, and 72 °C for 1 min and a final extension at 72 °C for 10 min. PCR products were purified with 2 µl Exonuclease I and FastAP™ Thermosensitive Alkaline Phosphatase mixture (1:2; Thermo Scientific) at 37 °C for 15 min, followed by 80 °C for 15 min and diluted 1:20 (scats) or 1:40 (hairs). Sequencing was performed using the BigDye Terminator 3.1 Cycle Sequencing Kit (Applied Biosciences) using a cycling protocol which involved an initial denaturation step at 95 °C for 60 s, followed by 30 cycles of 10 s at 96 °C, 10 s at 50 °C and 2 min at 60 °C. The products were purified using ABI-XTerminator beads (Applied Biosystems) and separated on an ABI 3730 DNA Analyzer (Applied Biosystems). Sequences of wolves and wildcats were aligned with Geneious v7.1.8 1 and aligned to our laboratory reference samples to identify haplotypes.
Microsatellite genotyping
Unlinked autosomal microsatellite data for grey wolves and European wildcats were obtained as part of the regular genetic monitoring conducted in our laboratory. Brown bear microsatellite genotyping data were obtained from collaborators in Greece (5). The markers and laboratory procedures are described elsewhere (wolves, 6; wildcats, 7; brown bears, 8). Briefly, a multiple-tubes approach was applied for wolves and wildcats, as is common practice for noninvasive samples, including three
Supplementary Figure S1
Overview of numbers of samples and loci included in each analysis based on quality criteria.
Supplementary Table S3
Overview of individuals that were represented by multiple samples in SNP and microsatellite data sets. One mismatch at one locus was accepted to consider two genotypes as belonging to the same individual. Note that brown bear samples had been individualized using microsatellites in the course of a previous study and consequently no matching genotypes were found in this study. n, number of samples; f.a., sample failed in microsatellite amplification; NA, matching not available due to failed microsatellite amplification of one of the samples.
Supplementary Table S5
Pairwise F ST values for grey wolves with microsatellite data (above the diagonal, n = 30 samples and 13 loci) and SNP data (below the diagonal, n = 35 samples and 85 loci). All samples were collected in Germany. Only groups with n > 5 were considered (5 wolves from 2 locations excluded). Probability values were based on 999 permutations; *p ≤ 0.05; **p ≤ 0.01; ***p ≤ 0.001.
Selection and genetic parameters for interpopulation hybrids between kouilou and robusta coffee
Selection of hybrid coffee plants derived from crosses between divergent populations is particularly relevant for the success of breeding programs. This study aimed to outline the best selection strategy in a hybrid population of Coffea canephora var. kouilou and var. robusta by estimating intrapopulation genetic parameters. Twenty full-sib progenies obtained from a North Carolina design II were installed in a randomized complete blocks design, with one plant per elementary plot. The following traits were evaluated: vegetative vigor, reaction to rust, plant height, diameter of canopy projection, maturity time, and bean yield. Significant individual genotypic variance and heritability estimates allowed effective selection. The multi-trait selection index applied between progenies and at the individual level provided gains of 5% and 40%, respectively. Thus, intrapopulation selection in a hybrid population is a viable strategy for selecting superior individuals to compose new crosses and clones for cultivars in the C. canephora breeding program, even with unbalanced data.
INTRODUCTION
The coffee plant, originally from Africa, presents two species of economic importance: Coffea arabica L., known as arabica coffee, and Coffea canephora Pierre ex Froehner, known as robusta coffee. The latter represents about one-third of the coffee produced worldwide and has traits of great importance to the food and pharmaceutical industry, such as high caffeine content and high soluble solids content (Tran et al. 2016).
The species C. canephora is a diploid allogamous species (2n = 2x = 22), with multistem shrubs and self-incompatible flowers (Carvalho 1946), and is divided into two genetic groups. The Guinean group is phenotypically characterized by long leaves, short stature, short internodes, tolerance to drought, and susceptibility to rust (caused by the fungus Hemileia vastatrix Berk. et Br.). The second group, referred to as Congolese, can be divided into five subgroups: SG1, SG2, B, C, and UW. Coffee plants of the SG1 subgroup are morphologically similar to those of the Guinean group. The individuals of the other subgroups present large, wide leaves and large grains and are highly resistant to rust and highly susceptible to water stress (Fazuolli et al. 2009; Montagnon et al. 2012; Teixeira et al. 2017).
In the production chain of caffeinated coffee, those of the SG1 subgroup are known as C. canephora var. kouilou (in Brazil, they are known as conilon coffee), and the others are known as C. canephora var. robusta (or robusta coffee) (Montagnon et al. 2012). Studies carried out in Ivory Coast suggest that hybrids between these groups express heterosis due to the high genetic variability present in the species (Leroy et al. 1993; 2014). Hybridization between parents of these groups, found in Brazil, is of great interest since it gathers traits of both groups in a single genotype and increases the variability available to the species' breeding. Thus, recurrent selection strategies can be used more efficiently.
This study aimed to estimate intrapopulation genetic parameters in C. canephora, to outline the best selection strategy in a hybrid population of coffee plants of the kouilou and robusta groups by means of a multi-trait selection index, and to select superior individuals to compose new crosses or clones for cultivars.
Plant material and experimental design
Twenty full-sib progenies were obtained from a North Carolina design II (NCII), consisting of five male parents of the kouilou group and five female parents of the robusta group, all of them belonging to the C. canephora breeding program of EPAMIG (Empresa de Pesquisa Agropecuária de Minas Gerais) (Table 1). The trial was a randomized complete blocks design (RCBD), with a different number of individuals representing each progeny, totaling 246 coffee plants. Elementary plots consisted of one coffee tree, spaced 3.0 × 1.5 m apart. The experiment was installed in 2011 and evaluated in 2013 and 2014, at the Experimental Field Oratórios, in the state of Minas Gerais (lat. 20°25'51" S, long. 42°48'21" W).
Evaluated traits
The evaluated agronomic traits were: vegetative vigor (VIG), evaluated on a scoring scale from 1 to 10, representing the worst and the best plants, respectively (Carvalho et al. 1979); fruit maturation cycle (MAT), classified on the scoring scale 1 = early, 2 = intermediate, and 3 = late; reaction to rust in the field (RUS), evaluated on the scoring scale 1 = immune, 2 = resistant, 3 = moderately resistant, 4 = moderately susceptible, and 5 = susceptible, modified from the scale proposed by Eskes (1981); plant height (PH), measured in cm from the soil to the last apical point of the coffee plant; and diameter of canopy projection (DCP), measured in cm, perpendicular to the row, from the canopy center, taking the greatest measure between both ends. Yield (Y), in 60-kg bags of green coffee per hectare, was evaluated by collecting all the fruits of the experimental plot. The total volume in liters (Vol) was then converted by the expression Y = [(number of plants·ha⁻¹) × Vol]/360, considering 360 liters of freshly harvested coffee fruits per bag of green coffee. Every trait was evaluated each year.
Statistical analyses
Genetic parameters were estimated according to the mixed model proposed by Resende (2007a), which estimates the variance components of the individual genotypic effects and of the genotype × harvest interaction, together with the genetic parameters. The model is given by (Eq. 1):

y = Xr + Zg + Wp + Ti + e,

where y is the data vector; r is the vector of replication effects (assumed fixed), added to the overall mean; g is the vector of individual genotypic effects (assumed random); p is the vector of plot effects (random); i is the vector of genotype × harvest interaction effects (random); e is the vector of residuals (random); and X, Z, W, and T are the incidence matrices for r, g, p, and i, respectively.

The estimated variance components were: σ²g, individual genotypic variance; σ²a, additive variance, ignoring 1/4 of the variance due to dominance and the fraction due to epistasis; σ²p, variance of plot effects; σ²i, variance of the genotype × harvest interaction; and σ²e, residual variance. From these components, the genetic parameters proposed by Resende (2002) were obtained (Eqs. 2 to 5): narrow-sense individual heritability (Eq. 2); broad-sense heritability at the progeny-mean level (Eq. 3); genotypic correlation of the genetic material across harvests (Eq. 4); and selection accuracy of progeny means across replications and harvests (Eq. 5).

Selection was based on the multi-trait index proposed by Resende (2007b), obtained as the sum of the products of the predicted genotypic value and the respective economic weight for each trait, with Y as the main trait. The economic weights were established according to Viana and Resende (2014). The individual additive genetic values were used for the selection of the recombinant population, and the individual genotypic values were used for the selection of clones to compose the index.

The genetic divergence between progenies was assessed with Tocher's clustering method (Rao 1952) applied to the matrix of Mahalanobis genetic distances (Resende 2007a), obtained from the predicted genotypic values and their variance and covariance matrix, as follows (Eq. 6):

D²ii' = δ G⁻¹ δ',

where D²ii' is the Mahalanobis distance between genotypes i and i'; G is the matrix of genotypic variance and covariance; δ = [d1, d2, ..., dj], with dj = Yij − Yi'j; and Yij is the mean of the i-th genotype for the j-th variable.

The computational statistical package SELEGEN-REML/BLUP (Resende 2016) was used for the resolution of the genetic statistical analysis; a toy illustration of the data structure implied by Eq. 1 is sketched below.
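To make the structure of Eq. 1 concrete, the toy simulation below builds the incidence matrices X, Z, W, and T for a small invented layout and composes a phenotype vector from the corresponding effects. The dimensions, variance components, and the simplification of treating every tree as its own single-tree plot are illustrative assumptions only; they do not reproduce the actual trial or the REML/BLUP estimation performed in SELEGEN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (arbitrary, much smaller than the real trial).
n_trees, n_blocks, n_harvests = 12, 3, 2
tree = np.repeat(np.arange(n_trees), n_harvests)          # each tree measured twice
block = np.repeat(rng.integers(0, n_blocks, n_trees), n_harvests)
harvest = np.tile(np.arange(n_harvests), n_trees)
n_obs = len(tree)

def incidence(levels, n_levels):
    """0/1 incidence matrix linking each record to one effect level."""
    M = np.zeros((len(levels), n_levels))
    M[np.arange(len(levels)), levels] = 1.0
    return M

X = incidence(block, n_blocks)                 # replications (fixed)
Z = incidence(tree, n_trees)                   # individual genotypic effects (random)
W = incidence(tree, n_trees)                   # plot effects (single-tree plots here)
T = incidence(tree * n_harvests + harvest, n_trees * n_harvests)  # genotype x harvest

# Arbitrary illustrative variance components, NOT the paper's estimates.
r = rng.normal(20.0, 2.0, n_blocks)
g = rng.normal(0.0, 1.5, n_trees)
p = rng.normal(0.0, 0.7, n_trees)
i = rng.normal(0.0, 1.0, n_trees * n_harvests)
e = rng.normal(0.0, 2.0, n_obs)

y = X @ r + Z @ g + W @ p + T @ i + e          # data vector following Eq. 1
```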
Genetic parameters
Individual genotypic variance (σ²g) was significant (Table 2) for the traits RUS, PH, and DCP (p ≤ 0.01) and for MAT (p ≤ 0.05), allowing the variability of the hybrid progenies per se to be exploited, heritabilities to be estimated, and selection to be carried out. Narrow-sense individual heritability (h²a) estimates ranged from 0.23 to 0.40 for all traits except Y, which presented a low value of 0.15 (Table 2). These values are approximately the same as the Y heritability estimated for C. canephora in Ivory Coast from crosses between the Guinean and Congolese groups (Montagnon et al. 2003). Those authors explain that the estimates are influenced by small progeny sizes and the small number of parents (around ten) involved in the crosses. Yield is a quantitative trait largely influenced by environmental conditions, and a small number of evaluations over the years can affect heritability estimates; Y data evaluated over 14 years revealed higher heritability in Cameroon (Cilas et al. 2003). The literature shows low heritability for Y: 0.0027 for arabica coffee in Brazil (Petek et al. 2008) and approximately 0.26 for arabica coffee in Cameroon (Cilas et al. 1998).
Broad-sense progeny-mean heritability (h²mp) estimates ranged from 0.44 to 0.56 across traits. Carias et al. (2016) reported h²mp values of moderate magnitude in conilon coffee progenies, ranging from 0.22 to 0.53. The parameter estimator used in this work accommodates unbalanced data and allows a better estimate, equivalent to that reported by Piepho and Möhring (2007).
According to Falconer (1987), population variability is essential to obtain selection gains, and heritability is the genetic parameter of greater importance for plant breeders since it determines the response to selection.
Thus, full-sib progenies have enough additive genetic variance for selection between and within progenies to be exploited in selection cycles, indicating intrapopulation recurrent selection (IRS) as a viable strategy to obtain superior hybrids and advance the selection cycles.Along with recombinant population, superior individuals can be extracted to compose clonal tests.
High accuracy values (Table 2) were observed for RUS, PH, and MAT, ranging from 0.72 to 0.75, and intermediate values for DCP, VIG, and Y, ranging from 0.59 to 0.67. Resende (2007a) emphasizes that selective accuracy (râa) is a good measure of the quality of an experiment, and the values observed in the present study are high (râa ≥ 0.70) or intermediate (0.40 ≤ râa ≤ 0.70). High selective accuracy indicates that the predicted values are close to the true values, suggesting good precision of the selection method used, even though the experiment involved unbalanced data, a single individual per elementary plot, and a variable number of replications among treatments. Elementary plots containing a single individual, combined with more replications, improve the statistical analysis in perennial species (Resende 2002).
Genotypic correlation between harvests (rgcolh) was of high magnitude (above 0.70) for all traits except Y (Table 2). According to Resende (2007a), estimates equal to or greater than 0.70 indicate that the genotype × harvest interaction is simple, whereas smaller values indicate a complex interaction. Simple interactions do not change the classification of genotypes in different harvests; conversely, complex interactions, such as crossover interactions, indicate difficulties in genotype selection, changing their classification between measurements and leading to selection based on the genotype × environment interaction (Malosetti et al. 2013). Thus, for Y, selection can be more efficient if the trait is evaluated over multiple harvests. More evaluations over the years increase selection efficiency (Fonseca et al. 2004); for the other traits, however, selection efficiency is not influenced by the number of harvests.
Selection between and within progenies
The multi-trait index was used in the selection of the best progenies, based on the individual additive genetic value, aiming at the selection of individuals to compose the recombinant population. This method is advantageous when assigning weights to the traits, since the index directly uses the predicted values and the weights related to the correlation with the main trait (Y). For the construction of the index, the economic weights for the traits were RUS (0.1916), PH (0.1653), DCP (0.20), VIG (0.14), and Y (0.2972). MAT did not compose the index because of the interest in selecting individuals with different fruit maturation cycles, which allowed obtaining populations with genotypes of early, intermediate, or late maturation.
The selection of the ten best progenies (5, 9, 22, 4, 12, 2, 10, 15, 11, and 16) provided a 5% genetic gain by the index. Therefore, 28 plants were selected (the three best plants within each selected progeny); for progeny 22, only one plant was selected due to unbalance within progenies (Table 3). The selected plants correspond to an effective progeny size (Nef) of 15, which is the equivalent number of unrelated individuals, resulting in a low coefficient of inbreeding of 0.033 (F = (2Nef)⁻¹). To avoid the selection of related individuals without reducing the selection intensity and genetic gain, the maximum number of selected genotypes should be restricted, as suggested by Resende and Barbosa (2005). Thus, for full-sib progenies, the authors describe Nef, which is given by (Eq. 7):

Nef = N(2n)(n + 1)⁻¹ (7)

where N is the number of progenies and n is the number of individuals per progeny.

The selected population (28 plants corresponding to the ten progenies) presented selection gains when the traits were analyzed separately (Table 3). MAT had an indirect selection in the negative direction (-31%), and thus the plants selected based on the other traits presented an early maturation cycle. The gain for RUS was zero, which evidences, based on the genetic values, the tolerance of the plants to the disease causative agent. For PH and DCP, gains were 2% and 3%, respectively, leading to the selection of coffee plants slightly taller when compared with the general mean. VIG and Y had gains of 3% and 12%, respectively, which reflects an increase (or selection gain) in the mean of the selected population of approximately four bags of benefited coffee·ha⁻¹ over that of the original population (Tables 2 and 3). This gain may be masked by environmental variance, since these traits were not significant for individual genotypic variance. To obtain real increases in selection gains, the variability for these traits in the recombinant population must be increased. High values of genetic gain for yield in hybrid progenies of C. canephora (65%) were reported by Leroy et al. (1997). Conversely, lower values were detected by Mistro et al. (2004), ranging from 15% to 8.15% when the number of selected progenies of robusta coffee was increased, based on the selection of only one variable.

Cluster analysis by Tocher's method (Rao 1952), using the Mahalanobis genetic distances as the dissimilarity measure, formed five clusters of progenies: cluster I was formed by 13 progenies; cluster II was composed of three progenies; cluster III was formed by two progenies; and clusters IV and V were composed of one progeny each (Table 4). Clusters are heterogeneous between and homogeneous within each other. The five clusters showed the most similar and the most divergent progenies, and those belonging to distinct clusters are more divergent than those belonging to the same cluster, which simplifies the understanding of the population structure. Ivoglo et al. (2008) also identified four clusters among hybrid progenies of robusta and conilon (kouilou) coffee. The selected progenies (5, 9, 22, 4, 12, 2, 10, 15, 11, and 16) belong to distinct clusters, containing representatives of clusters I, II, III, IV, and V. Therefore, the cross between progenies of different clusters increases the probability of favorable combinations in the offspring that will compose the crossing blocks. Resende et al. (2014) emphasize that if the goal is to create more variability or promote heterosis, the cross between genetically divergent clusters is preferable.

Clones Selection

The plants selected for cloning originated from seven of the ten progenies obtained and evaluated in this work (5, 9, 22, 4, 12, 2, and 15). The best plants were selected based on the multi-trait index, using the calculated economic weights and the individual genotypic values, regardless of the selection of the best progenies. At a 20% selection intensity, selecting 48 coffee plants belonging to different progenies provided a 40% genetic gain. Evaluating Y of the selected genotypes, an increase in mean yield of 9.58 bags of benefited coffee per hectare was obtained when compared with the general mean of the population (Table 2), totaling a mean of 42.98 bags of benefited coffee per hectare (data not shown in tables). This fact is fundamental for the C. canephora breeding program. The use of different genotypes for the locus controlling this trait should be considered in commercial varieties. Ferrão et al. (2007) suggest that a clonal variety should be composed of at least eight different clones, which ensures good sustainability of the activity and prevents risks of genetic vulnerability.
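Returning to the effective progeny size formula (Eq. 7), a quick numerical check (a sketch using the round numbers reported above, ignoring the single-plant unbalance in progeny 22) reproduces the values of Nef = 15 and F = 0.033:

```python
N, n = 10, 3                 # selected progenies and individuals selected per progeny
Nef = N * (2 * n) / (n + 1)  # Eq. 7: effective progeny size
F = 1 / (2 * Nef)            # coefficient of inbreeding
print(Nef, round(F, 3))      # 15.0 0.033
```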
Selection between and within progenies to compose the recombinant population should contain different crosses and ensure the genetic variability of the population since it will go through several IRS cycles in a breeding program. At the same time, individuals should be selected by the genotypic value and subsequently cloned, which will reduce the time to obtain a clonal cultivar, as proposed by Ferrão et al. (2007).
CONCLUSION
Genetic variability was observed by the estimates of intrapopulation genetic parameters, with selection gains successfully obtained from the multi-trait selection index, even with unbalanced data. Thus, the selection in hybrids derived from crosses between the groups robusta and kouilou, combined with the robustness of mixed models methodology, proved to be a viable selection strategy for the Coffea canephora breeding program.
Statistical model: y = Xr + Zg + Wp + Ti + e, where the estimated variance components were: the individual genotypic variance; the additive variance (ignoring ¼ of the variance due to dominance and the fraction due to epistasis); the variance of the plot effect; the variance due to the genotype × harvest interaction; and the residual error variance. The equations for the genetic parameters, proposed by Resende (2002), are given in Eqs. 2 to 5. Selection was based on the multi-trait index, obtained as the sum of the products between the predicted values and the respective economic weight for each trait, with Y as the main trait; the weights were established according to Viana and Resende. The individual additive genetic value was used for the selection of the recombinant population, and the individual genotypic values were used to compose the index for clone selection.
Table 3. Multi-trait selection index for the traits reaction to rust (RUS), plant height (PH), diameter of canopy projection (DCP), vegetative vigor (VIG), maturation time (MAT), and yield (Y), with the selection of the three best individuals within the selected Coffea canephora progenies. REP = replications; *progeny with only one representative individual; **SG = percentage of selection gain.
|
2019-04-03T13:10:57.995Z
|
2019-01-11T00:00:00.000
|
{
"year": 2019,
"sha1": "cd14d8fd1fc89e53cc95759730074673f26d46b8",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/brag/v78n1/0006-8705-brag-1678-44992018124.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cd14d8fd1fc89e53cc95759730074673f26d46b8",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
}
|
237413309
|
pes2o/s2orc
|
v3-fos-license
|
Artificial Intelligence in Dry Eye Disease
Dry eye disease (DED) has a prevalence of between 5 and 50%, depending on the diagnostic criteria used and population under study. However, it remains one of the most underdiagnosed and undertreated conditions in ophthalmology. Many tests used in the diagnosis of DED rely on an experienced observer for image interpretation, which may be considered subjective and result in variation in diagnosis. Since artificial intelligence (AI) systems are capable of advanced problem solving, use of such techniques could lead to more objective diagnosis. Although the term 'AI' is commonly used, recent success in its applications to medicine is mainly due to advancements in the sub-field of machine learning, which has been used to automatically classify images and predict medical outcomes. Powerful machine learning techniques have been harnessed to understand nuances in patient data and medical images, aiming for consistent diagnosis and stratification of disease severity. This is the first literature review on the use of AI in DED. We provide a brief introduction to AI, report its current use in DED research and its potential for application in the clinic. Our review found that AI has been employed in a wide range of DED clinical tests and research applications, primarily for interpretation of interferometry, slit-lamp and meibography images. While initial results are promising, much work is still needed on model development, clinical testing and standardisation.
Introduction
Dry Eye Disease (DED) is one of the most common eye diseases worldwide, with a prevalence of between 5 and 50%, depending on the diagnostic criteria used and study population [1]. Yet, although symptoms stemming from DED are reported as the most common reason to seek medical eye care [1], it is considered one of the most underdiagnosed and undertreated conditions in ophthalmology, and it represents a considerable burden for patients and health care systems. The sub-field of machine learning known as deep learning uses deep artificial neural networks, and has gained increased attention in recent years, especially for its image and text recognition abilities. In the field of ophthalmology, deep learning has so far mainly been used in the analysis of data from the retina to segment regions of interest in images, automate diagnosis and predict disease outcomes [10]. For instance, the combination of deep learning and optical coherence tomography (OCT) technologies has allowed reliable detection of retinal diseases and improved diagnosis [11]. Machine learning also has potential for use in the diagnosis and treatment of anterior segment diseases, such as DED. Many of the tests used for DED diagnosis and follow-up rely on the experience of the observer for interpretation of images, which may be considered subjective [12]. AI tools can be used to interpret images automatically and objectively, saving time and providing consistency in diagnosis.
Several reviews have been published that discuss the application of AI in eye disease, including screening for diabetic retinopathy [13], detection of age-related macular degeneration [14] and diagnosis of retinopathy of prematurity [15]. We are, however, not aware of any review on AI in DED. In this article, we therefore provide a critical review of the use of AI systems developed within the field of DED, discuss their current use and highlight future work.
Artificial intelligence
AI is information technology capable of performing activities that require intelligence. It has gained substantial popularity within the field of medicine due to its ability to solve ubiquitous medical problems, such as classification of skin cancer [16], prediction of hypoxemia during surgeries [17] and identification of diabetic retinopathy [18]. Machine learning is a sub-field of AI encompassing algorithms capable of learning from data without being explicitly programmed. All AI systems used in the studies included in this review fall within the class of machine learning. The process by which a machine learning algorithm learns from data is referred to as training. The outcome of the training process is a machine learning model, and the model's output is referred to as predictions. Different learning algorithms are categorised according to the type of data they use, and referred to as supervised, unsupervised and reinforcement learning. The latter is excluded from this review, as none of the studies use it, while the two former are introduced in this section.
A complete overview of the algorithms encountered in the reviewed studies is provided in Figure 1, sorted according to the categories described below.
Supervised learning
Supervised learning denotes the learning process of an algorithm using labelled data, meaning data that contains the target value for each data instance, e.g., tear film lipid layer category. The learning process involves extracting patterns linking the input variables and the target outcome. The performance of the resulting model is evaluated by letting it predict on a previously unseen data set, and comparing the predictions to the true data labels. See Section 2.5 for a brief discussion of evaluation metrics. Supervised learning algorithms can perform regression and classification, where regression involves predicting a numerical value for a data instance, and classification involves assigning data instances to predefined categories. Figure 1 contains an overview of supervised learning algorithms encountered in the reviewed studies.
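As a minimal illustration of supervised classification (a generic sketch, not taken from any of the reviewed studies), the snippet below trains a classifier on synthetic tabular data standing in for labelled clinical measurements and evaluates it on a held-out set; the features and labels are invented.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for labelled clinical measurements (e.g. TBUT, osmolarity).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)     # supervised training on labelled data
print(accuracy_score(y_test, model.predict(X_test)))   # evaluation on previously unseen data
```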
Unsupervised learning
Unsupervised learning denotes the training process of an algorithm using unlabelled data, i.e. data not containing target values. The task of the learning algorithm is to find patterns or data groupings by constructing a compact representation of the data. This type of machine learning is commonly used for grouping observations together, detecting relationships between input variables, and for dimensionality reduction. As unsupervised learning data contains no labels, a measure of model performance depends on considerations outside the data [see 19, chap. 14], e.g., how the task would have been solved by someone in the real world.
For clustering algorithms, similarity or dissimilarity measures such as the distance between cluster points can be used to measure performance, but whether this is relevant depends on the task [20]. Unsupervised algorithms encountered in the reviewed studies can be divided into those performing clustering and those used for dimensionality reduction, see Figure 1 for an overview.
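A minimal sketch of the two unsupervised tasks mentioned above, using synthetic unlabelled data: principal component analysis for dimensionality reduction followed by k-means clustering. The number of components and clusters are arbitrary choices made for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 10))            # unlabelled data: 150 samples, 10 variables

X_2d = PCA(n_components=2).fit_transform(X)                                  # dimensionality reduction
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_2d)   # clustering
print(np.bincount(labels))                # sizes of the groupings found by the algorithm
```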
Artificial neural networks and deep learning
Artificial neural networks are loosely inspired by the neurological networks in the biological brain, and consist of artificial neurons organised in layers. How the layers are organised within the network is referred to as its architecture. Artificial neural networks have one input layer, responsible for passing the data to the network, and one or more hidden layers. Networks with more than one hidden layer are called deep neural networks. The final layer is the output layer, providing the output of the entire network. Deep learning is a sub-field of machine learning involving training deep neural networks, which can be done both in a supervised and unsupervised manner. We encounter several deep architectures in the reviewed studies. The two more advanced types are convolutional neural networks (CNNs) and generative adversarial networks (GANs). CNN denotes the commonly used architecture for image analysis and object detection problems, named for having so-called convolutional layers that act as filters identifying relevant features in images. CNNs have gained popularity recently and all of the reviewed studies that apply CNNs were published in 2019 or later. Advanced deep learning techniques will most likely replace the established image analysis methods. This trend has been observed within other medical fields such as gastrointestinal diseases and radiology [21,22]. A GAN is a combination of two neural networks: A generator and a discriminator competing against each other. The goal of the generator is to produce fake data similar to a set of real data. The discriminator receives both real data and the fake data from the generator, and its goal is to discriminate the two. GANs can be used i.a. to generate synthetic medical data, alleviating privacy concerns [23].
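For concreteness, here is a minimal CNN definition (a generic sketch, not the architecture of any reviewed study), sized for small grayscale image patches; the input size and the two output classes are assumptions made for illustration.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),           # assumed 64x64 grayscale input patches
    layers.Conv2D(16, 3, activation="relu"),  # convolutional layers act as learned filters
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),      # hidden (fully connected) layer
    layers.Dense(2, activation="softmax"),    # assumed binary output, e.g. healthy vs. disease
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```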
Workflow for model development and validation
The data used for developing machine learning models is ideally divided into three independent parts: A training set, a validation set and a test set. The training set is used to tune the model, the validation set to evaluate performance during training, and the test set to evaluate the final model. A more advanced form of training and validation, is k-fold cross-validation. Here, the data is split into k parts, of which one part is set aside for validation, while the model is trained on the remaining data. This is repeated k times, and each time a different part of the data is used for validation. The model performance can be calculated as the average performance for the k different models [see 19, chap. 7]. It is considered good practice to not use the test data during model development and vice versa, the model should not be tuned further once it has been evaluated on the test data [see 19, chap.7]. In cases of class imbalance, i.e., unequal number of instances from the different classes, there is a risk of developing a model that favors the prevalent class. If the data is stratified for training and testing, this might not be captured during testing. Class imbalance is common in medical data sets, as there are for instance usually more healthy than ill people in the population [24].
Whether to choose a class distribution that represents the population, a balanced or some other distribution depends on the objective. Various performance scores should regardless always be used to provide a full picture of the model's performance.
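The workflow described in this section can be sketched as follows on synthetic data: the test set is split off first and left untouched, 5-fold cross-validation is used during development, and the test set is used only once for the final evaluation. This is a generic illustration, not code from any reviewed study.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

# Hold out a test set before any model development.
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression()
cv_scores = cross_val_score(model, X_dev, y_dev, cv=5)   # 5-fold cross-validation
print("CV accuracy: %.2f +/- %.2f" % (cv_scores.mean(), cv_scores.std()))

# Final evaluation, done once, on the untouched test set.
model.fit(X_dev, y_dev)
print("Test accuracy: %.2f" % model.score(X_test, y_test))
```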
Performance scores
In order to assess how well a machine learning model performs, its performance can be assigned a score. In supervised learning, this is based on the model's output compared to the desired output. Here, we introduce scores used most frequently in the reviewed studies. Their definitions as well as the remaining scores used are provided in Appendix A.1. A commonly used performance score in classification is accuracy, eq. (A.3), which denotes the proportion of correctly predicted instances. Its use is inappropriate in cases of strong class imbalance, as it can reach high values if the model always predicts the prevalent class. The sensitivity, also known as recall, eq. (A.4), denotes the true positive rate. If the goal is to detect all positive instances, a high sensitivity indicates success. The precision, eq. (A.5), denotes the positive predictive value. The specificity, eq. (A.6), denotes the true negative rate, and is the negative class version of the sensitivity. The F1 score, eq. (A.7), is the harmonic mean between the sensitivity and the precision. It is not symmetric between the classes, meaning it is dependent on which class is defined as positive.
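Assuming a binary confusion matrix with counts TP, FP, TN and FN, the scores above can be computed as in this short sketch, following the standard formulas referenced in Appendix A.1.

```python
def binary_scores(tp, fp, tn, fn):
    """Standard binary classification scores from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)            # recall / true positive rate
    precision   = tp / (tp + fp)            # positive predictive value
    specificity = tn / (tn + fp)            # true negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, precision, specificity, f1

# Example: 40 true positives, 10 false positives, 45 true negatives, 5 false negatives.
print(binary_scores(40, 10, 45, 5))
```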
Image segmentation involves partitioning the pixels in an image into segments [25]. This can for example be used to place all pixels representing the pupil into the same segment while pixels representing the iris are placed in another segment. The identified segments can then be compared to manual annotations.
Performance scores used include the Average Pompeiu-Hausdorff distance, (A.17), the Jaccard index and the support, all described in Appendix A.1.
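For segmentation, the (global) Jaccard index is simply the overlap between the predicted and reference pixel sets; a minimal sketch for binary masks, with made-up masks:

```python
import numpy as np

def jaccard_index(pred, target):
    """Jaccard index (intersection over union) for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0

pred   = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1     # predicted segment
target = np.zeros((8, 8), dtype=int); target[3:7, 3:7] = 1   # manual annotation
print(jaccard_index(pred, target))
```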
AI regulation
Approved AI devices will be a major part of the medical service landscape in the future. Currently, many countries are actively working on releasing AI regulations for healthcare, including the European Union (EU), the United States, China, South Korea and Japan. On 21 April 2021, the EU released a proposal for a regulatory framework for AI [26]. The US Food and Drug Administration (FDA) is also working on AI legislation for healthcare [27].
In the framework proposed by the EU, AI systems are divided into the four categories low risk, minimal risk, high risk and unacceptable risk [26]. AI systems that fall into the high risk category are expected to be subject to strict requirements, including data governance, technical documentation, transparency and provision of information to users, human oversight, robustness and cyber security, and accuracy. It is highly likely that medical devices using AI will end up in the high risk category. Looking at the legislation proposals [26,27], robustness against adversarial attacks, high quality of data sets, proper performance assessment, continuous post-deployment monitoring, human oversight and interaction between AI systems and humans will be major research topics for the development of AI in healthcare.
Search methods
A systematic literature search was performed in PubMed and Embase in the period between March 20 and May 21, 2021. The goal was to retrieve as many studies as possible applying machine learning to DED related data. The following keywords were used: All combinations of "dry eye" and "meibomian gland dysfunction" with "artificial intelligence", "machine learning", "computer vision", "image recognition", "bayesian network", "decision tree", "neural network", "image based analysis", "gradient boosting", "gradient boosting machine" and "automatic detection". In addition, searches for "ocular surface" combined with both "artificial intelligence" and "machine learning" were made. Three of the studies found in the searches including "ocular surface" were also found among the studies in the searches including "dry eye". See also an overview of the search terms and combinations in Figure 2.
No time period limitations were applied for any of the searches.
Selection criteria
The studies to include in the review had to be available in English in full-text. Studies not investigating the medical aspects of DED were excluded (e.g., other ocular diseases and cost analyses of DED). Moreover, the studies had to describe the use of a machine learning model in order to be considered. Reviews were not considered. The studies were selected in a three-step process. One review author screened the titles on the basis of the inclusion criteria. The full-texts were then retrieved and studied for relevance. An overview of the included studies is given in Tables 1 to 4 for the clinical, biochemical and demographical studies, respectively.
Information on the data used in each study is shown in Table 5. We grouped studies according to the type of clinical test or type of study: TBUT, interferometry and slit-lamp images, IVCM, meibography, tear osmolarity, proteomics analysis, OCT, population surveys and other clinical tests. We found most studies employed machine learning for interpretation of interferometry, slit-lamp and meibography images.
Fluorescein tear break-up time
Shorter break-up time indicates an unstable tear film and higher probability of DED. Machine learning has been employed to detect dry areas in TBUT videos and estimate TBUT [12,63,57,58]. Use of the Levenberg-Marquardt algorithm to detect dry areas achieved an accuracy of 91% compared to assessments by an optometrist [12]. Application of Markov random fields to label pixels based on degree of dryness was used to estimate TBUT resulting in an average difference of 2.34 seconds compared to clinician assessments [63].
Polynomial functions have also been used to determine dry areas, where threshold values were fine-tuned before estimation of TBUT [57]. This method resulted in more than 90% of the videos deviating by less than ±2.5 seconds compared to analyses done by four experts on videos not used for training [58]. Taken together, these studies indicate that TBUT values obtained using automatic methods are within an acceptable range compared to experts. However, we only found four studies, all of them including a small number of subjects.
Further studies are needed to verify the findings and to test models on external data.
Interferometry and slit-lamp images
Interferometry is a useful tool that gives a snapshot of the status of the tear film lipid layer, which can be used to aid diagnosis of DED. Machine learning systems have been applied to interferometry and slit-lamp images for lipid layer classification based on morphological properties [60,59,55,54,52,34,35], estimation of the lipid layer thickness [50,36], diagnosis of DED [49,47], determination of ocular redness [61] and estimation of tear meniscus height [48,29].
Diagnosis of DED can be based on the following morphological properties: open meshwork, closed meshwork, wave, amorphous and color fringe [74]. Most studies used these properties to automatically classify interferometer lipid layer images using machine learning [59,55,54]. In one of the studies, the same data was used for training and testing, which is not ideal [55]. Another study did not report the data their system was trained on [54]. Peteiro et al. evaluated images using five different machine learning models [52]. In this study, the amorphous property was not included as one of the possible classifications, in contrast to the other studies. A simple neural network achieved the overall best performance, with an accuracy of 96%. However, because leave-one-out cross validation was applied, the model may have overfitted on the training data [19].
da Cruz et al. compared six different machine learning models and found that the random forest was the best classifier, regardless of the pre-processing steps used [34,35]. The highest performance was achieved by application of Ripley's K function in the image pre-processing phase, and Greedy Stepwise technique used simultaneously with the machine learning models for feature selection [35]. Since all models were evaluated with cross validation, the system should be externally evaluated on new images before being considered for routine use in the clinic.
Hwang et al. investigated whether tear film lipid layer thickness can be used to distinguish meibomian
gland dysfunction (MGD) severity groups [50]. Machine learning was used to estimate the thickness from Lipiscanner and slit-lamp videos with promising results. Images were pre-processed and the flood-fill algorithm and canny edge detection were applied to locate and extract the iris from the pupil. A significant difference between two MGD severity groups was detected, suggesting that the technique could be used for the evaluation of MGD. Keratograph images can also be used to determine tear film lipid layer thickness.
Comparison of two different image analysis methods using a generalized linear model showed that there was a high correlation between the two techniques [36]. The authors concluded that the simple technique was sufficient for evaluation of tear film lipid layer thickness. However, only 28 subjects were included in the study.
The use of fractal dimension estimation techniques was investigated for feature extraction from interferometer videos for diagnosis of DED [49]. The tear meniscus contains 75 − 90% of the aqueous tear volume [75]. Consequently, the tear meniscus height can be used as a quantitative indicator for DED caused by aqueous deficiency. When connected component labelling was applied to slit-lamp images, the Pearson's correlation between the predicted meniscus heights and an established software methodology (ImageJ [76]) was high, ranging between 0.626 and 0.847 [48]. The machine learning system was found to be more accurate than four experienced ophthalmologists. The tear meniscus height can also be estimated from keratography images using a CNN [29]. The automatic machine learning system achieved an accuracy of 82.5% and was found to be more effective and consistent than a well-trained clinician working with limited time.
Many of the studies apply SVM as their type of machine learning model without testing how other machine learning models perform. However, three of the studies tested several types of models and found that SVM did not perform the best [52,34,35]. It is difficult to compare the studies due to different applications and evaluation metrics. Despite promising results, most of the studies [60,59,55,52,34,35,50,36,49,61,48] did not evaluate their systems on external data. The systems should be tested on independent data before they can be considered for clinical application. Moreover, some studies were small [61,36] or pilots [48,29], and the suggested models should be tested on a larger number of subjects.
In vivo confocal microscopy
IVCM is a valuable non-invasive tool used to examine the corneal nerves and other features of the cornea [77]. IVCM images were used in a small study to assess characteristics of the corneal subbasal nerve plexus for diagnosis of DED [42]. Application of random forest and a deep neural network [43] gave promising results with an AUC value of 0.828 for detecting DED [42]. IVCM images of corneal nerves can also be analyzed by machine learning models to estimate the length of the nerve fiber [41]. Authors used a CNN with a U-net architecture that had been pre-trained on more than 5, 000 IVCM images of corneal nerves. The model showed that nerve fiber length was significantly longer after intense pulsed light treatment in MGD patients, which agreed with manual annotations from an experienced investigator with an AUC value of 0.96 and a sensitivity of 0.96. High-resolution IVCM images were also used to detect obstructive MGD [38]. Combinations of nine different CNNs were trained and tested on the images using 5-fold cross validation. Classification by the models was compared to diagnosis made by three eyelid specialists. The best performance was achieved when four different models were combined, with high sensitivity, specificity and AUC values, see Table 1. These promising results suggest that CNNs can be useful for detection and evaluation of MGD. Deep learning methods such as CNNs have the advantage that feature extraction from the images prior to analysis is not required as this is performed automatically by the model.
IVCM images have been investigated for changes in immune cells across different severities of DED for
diagnostic purposes [28]. A generalized linear model showed significant differences in dendritic cell density and morphology between DED patients and healthy individuals, but not between the different DED subgroups, see Table 1. While results using machine learning to interpret IVCM images are promising, larger clinical studies are needed to validate findings before clinical use can be considered.
Meibography
The meibomian glands are responsible for producing meibum, important for protecting the tear fluid from evaporation. Reduced secretion of meibum due to a reduced number of functional meibomian glands and/or obstruction of the ducts is a major cause of evaporative DED and MGD. Classification of meibomian glands using meibography is routine for experienced experts, but this is not the case for all clinicians. Moreover, automatic methods can be faster than human assessment.
Meibography images may require several pre-processing steps before they can be classified. One study trained an SVM on extracted features from the images [62]. Pre-processing included the dilation, flood-fill, skeletonization and pruning algorithms. The model achieved a sensitivity of 0.979 and specificity of 0.961.
However, in contrast to all other image analysis methods, this method is not completely automatic as the images need to be manipulated manually before they are passed on to the system.
A combination of Otsu's method and the skeletonization and watershed algorithms was useful in automatically quantifying meibomian glands [53]. This method was faster than an ophthalmologist and achieved a sensitivity and specificity of 0.993 and 0.975, respectively. Another automatic method applied Bézier curve fitting as part of the analysis [51]. The reported sensitivity was 1.0, while the specificity was 0.98. Xiao et al. sequentially applied a Prewitt operator, Graham scan, fragmentation and skeletonization algorithms for image analysis to quantify meibomian glands [32]. The agreement between the model results and two ophthalmologists was high with Kappa values larger than 0.8 and low false positive rates (< 0.06). The false negative rate was 0.19, suggesting that some glands were missed by the method. A considerable weakness of this study was that only 15 images were used for model development, and consequently it might not work well on unseen data. Another study automatically graded MGD severity using a Sobel operator, polynomial functions, fragmentation algorithm and Otsu's method [45]. While the method was found to be faster, the results were significantly different from clinician assessments.
Deep learning approaches were used by four studies evaluating meibomian gland features and relationships between meibography images [46,39,33,31]. One of the studies used a CNN to automatically assess meibomian gland characteristics [39]. Images from two different devices collected from various hospitals were used to train and evaluate the CNN. This is an example of uncommonly good practice, as most medical AI systems are developed and evaluated on data from only one device and/or hospital. The only study to use a GAN architecture tested it on infrared 3D images of meibomian glands in order to evaluate MGD [31]. Comparing the model output with true labels, the performance scores were better than for state of the art segmentation methods. The Pearson correlations between the new automated method and two clinicians were 0.962 and 0.968.
Four of the studies did not evaluate their proposed systems on external data [53,51,45,32]. Since the number of images used for model development was limited, the models can have overfit, and external evaluations should be performed to test how well the systems generalize to new data.
Tear osmolarity
Tear osmolarity is a measure of tear concentration, and high values can indicate dry eyes. Cartes et al. [65] investigated use of machine learning to detect DED based on this test. Four different machine learning models were compared. Noise was added to osmolarity measurements during the training phase, while original data without noise was used for final evaluation. The logistic regression model achieved 85% accuracy. However, since the models were trained and tested on the same data, the reported score is most likely not representative for how well the model generalizes to new data.
Proteomic analysis
Proteomic analysis describes the qualitative and quantitative composition of proteins present in a sample. Grus et al. compared tear proteins in individuals with diabetic DED, non-diabetic DED and healthy controls for discrimination between the groups [70]. The authors used discriminant analysis and principal component analysis combined with k-means clustering. Both models achieved low accuracies when predicting all three categories. However, classification into DED and non-DED achieved accuracies of 72% and 71% for discriminant analysis and k-means clustering, respectively. In another study by the same group, tear proteins analyzed using deep learning discriminated subjects as healthy or having DED with an accuracy of 89% [69].
An accuracy of 71% was achieved using discriminant analysis. A combination of discriminant analysis for detecting the most important proteins and a deep neural network for classification was also investigated [68].
High accuracy, sensitivity and specificity were reported. Discriminant analysis was also used by Gonzalez et al. in analysis of the tear proteome [67]. The most important proteins were selected to train an artificial neural network to classify tear samples as aqueous-deficient DED, MGD or healthy. The model gave an overall accuracy of 89.3%. Principal component analysis yielded good separation of healthy controls and aqueous-deficient conditions. This system achieved the highest accuracy of all the reviewed proteomic studies. Considered together, the results from the four studies [70,69,68,67] suggest that neural networks applied alone or together with other techniques perform better than discriminant analysis for detecting DED-related protein patterns in the tear proteome.
Jung et al. used a network model based on modularity analysis to describe the tear proteome with respect to immunological and inflammatory responses related to DED [66]. In this study, patterns in tears and lacrimal fluid were investigated in patients with DED. Since only 10 subjects were included, the study should be performed on a larger cohort of patients to verify the results.
Optical coherence tomography
Thickening of the corneal epithelium can be a sign of abnormalities in the cornea. Moreover, corneal thickness could potentially be a marker for DED. Kanellopoulos et al. developed a linear regression model to look for possible correlations between corneal thickness metrics measured using anterior segment optical coherence tomography (AS-OCT) and DED [56]. However, neither the model predictions nor performance were reported, making it difficult to assess the usefulness of the study. The type of instrument used to determine the corneal thickness was found to affect the results [37]. Measurements from AS-OCT and Pentacam were compared and multivariable regression was used to detect differences between the two techniques regarding the measured central corneal thickness and the thinnest corneal thickness. Individuals with mild DED, severe DED and healthy subjects were examined. The two techniques gave significantly different results in terms of the resulting β-coefficients in the multivariable regression model for individuals with severe DED. Images from clinical examinations with AS-OCT were used to diagnose DED [30]. A pretrained VGG19 CNN [78] was fine-tuned using separate images for training and validation. Two similar CNN models were developed, and evaluation was performed on an external test set. Both achieved impressively high performance scores.
The AUC values were 0.99 and 0.98. This is one out of two studies in this review that used an independent test set after model development. Such practice is essential for a realistic impression of how well the model generalizes to new data not used during model development. The good performance is likely linked to the large amount of training data (29,000 images), which is essential for deep learning methods. Most of the reviewed studies use significantly smaller data sets, which constitutes a disadvantage. Stegmann et al. analysed OCT images from healthy subjects for automatic detection of the lower tear meniscus [40]. Two different CNNs were trained and evaluated using 5-fold cross validation. The tear menisci detected by the models were compared to evaluations from an experienced grader. The best CNN achieved an average accuracy of 99.95%, sensitivity of 0.9636 and specificity of 0.9998. The system is promising regarding fast and accurate segmentation of OCT images. However, more images from different OCT systems, including non-healthy subjects, should be used to verify and improve the analysis.
The two studies [78,40] showed that CNNs could be an appropriate tool for image analysis. CNNs are likely to increase in popularity within the field of DED due to promising results for solving image related tasks, including feature extraction.
Other clinical tests
Machine learning models were used to analyse results from a variety of clinical tests to expand understanding of the DED process [64]. The study included subjects with DED and healthy subjects. Subjective cutoff values from clinical tests were used to assign subjects to the DED class. Hierarchical clustering and a decision tree were applied sequentially to group the subjects based on their clinical test results. The resulting groups were compared to the original groups. Because the analysis was based on objective measurements, it could be used to develop more objective diagnostic criteria. This could lead to earlier detection and more effective treatment of DED.
Population surveys
Population surveys can provide valuable insight regarding the prevalence of DED and help detect risk factors for developing the disease. Japanese visual display terminal workers were surveyed with the objective of detecting DED [73]. Dry eye exam data and subjective reports were used for diagnosis. This was passed to a discriminant analysis model. When compared to diagnosis by a dry eye specialist, the model showed a high sensitivity of 0.931, but low specificity of 0.437. This is a very low specificity, but is not necessarily bad if the aim is to detect as many cases of DED as possible and there is less concern about misclassification of healthy individuals. Data from a national health survey were analysed in order to detect risk factors for DED [72].
Here, individuals were regarded as having DED if they had been diagnosed by an ophthalmologist, and were experiencing dryness. Feature modifications were performed by a decision tree, and the most important features were selected using lasso. β-coefficients from a logistic regression trained on the most important features were used to rank the features. Women, individuals who had received refractive surgery and those with depression were detected as having the highest risk for developing DED. Even though the models in the study were trained on data from more than 3500 participants, the reported performance scores were among the poorest in this review with a sensitivity of 0.66 and a specificity of 0.68. A possible reason could be that the selected features were not ideal for detecting DED. However, the detected risk factors have previously been shown to be associated with DED [3,79,80]. The findings suggest that the data quality from population surveys might not be as high as in other types of studies, which could lead to misinterpretation by the machine learning model.
The association between DED and dyslipidemia was investigated by combining data from two population surveys in Korea in [71]. A generalized linear model was used to investigate linear characteristics between features and the severity of DED. The model showed significant increase in age, blood pressure and prevalence of hypercholesterolemia over the range from no DED to severe DED. Evaluation of the association between dyslipidemia and DED using linear regression showed that the odds ratio for men with dyslipidemia was higher than 1 compared to men without dyslipidemia. This association was not found in women. The study results suggest a positive association between DED and dyslipidemia in men, but not in women.
Future perspectives
In order to benchmark existing and future models, we advocate that the field of DED should have a common, centralized and openly available data set for testing and evaluation. The data should be fully representative for the relevant clinical tests. In order to ensure that models are applicable to all populations of patients, medical institutions, and types of equipment around the world, they must be evaluated on data from different demographic groups of patients across several clinics and, if relevant, from different medical devices. Moreover, the test data set should not be available for model development, but only for final evaluation. A common standard on these processes will increase the reproducibility and comparability of studies. In addition, a cross hospitals/centers data set would solve important challenges of applying AI in clinical practice, such as metrics not reflecting clinical applicability, difficulties in comparing algorithms, and underspecification. These have all been identified as being among the main obstacles for adoption of any medical AI system in clinical practice [81,82].
A possible challenge regarding implementation in the clinic is that hospitals do not necessarily use the same data platforms, which might prevent widespread use of machine learning systems. Consequently, solutions for implementing digital applications across hospitals should be considered.
Model explanations are important in order to understand why a complex machine learning model produces a certain prediction. For healthcare providers to trust the systems and decide to use them in the clinic, the systems should provide understandable and sound explanations of the decision-making process.
Moreover, they could assist clinicians when making medical decisions [17]. When developing new machine learning systems within DED, effort should be made to present the workings of the resulting models and their predictions in an easy to interpret fashion.
Conclusions
We observed a large variation in the type of clinical tests and the type of data used in the reviewed studies.
This is also true regarding the extent of pre-processing applied to the data before passing it to the machine learning models. The studies analysing images can be divided into those applying deep learning techniques directly on the images, and those performing extensive pre-processing and feature extraction before the data is passed to the machine learning model in a tabular format. The number of studies belonging to the first group has increased significantly over the past 3 years. As deep learning techniques become more established, these will probably replace more traditional image pre-processing and feature extraction techniques.
We noted that there was a lack of consensus regarding how best to perform model development, including evaluation. This made it difficult to estimate how well some models will perform in the clinic and with new patients, and also to compare the different models. Comparison was further complicated by the use of different types of performance scores. In addition there was no culture of data and code sharing, which makes reproducibility of the results impossible. For the future, focus should be put on establishing data and code sharing as a standard procedure.
In conclusion, the results from the different studies' machine learning models are promising, although much work is still needed on model development, clinical testing and standardisation. AI has a high potential for use in many different applications related to DED, including automatic detection and classification of DED, investigation of the etiology and risk factors for DED, and in the detection of potential biomarkers. Effort should be made to create common guidelines for the model development process, especially regarding model evaluation. Prospective testing is recommended in order to evaluate whether proposed models can improve the diagnostics of DED, and the health and quality of life of patients with DED.
Disclosure
The authors report no conflicts of interest.
A.1. Performance scores used
If there are two categories available, the task is referred to as binary classification, while more than two categories is referred to as multi-class. For binary classification, the true outcome belongs to one of two categories, e.g., healthy or ill, often referred to as positive (P) or negative (N). A binary classifier assigns new data instances to these two categories, and the prediction can be either true (T), meaning correct, or false (F), meaning incorrect.

The concordance correlation coefficient measures the agreement between two data sets by measuring the variation around the 45 degrees concordance line through the origin [84]. The value ranges between 1 and −1. When the two data sets share mean and standard deviation, the concordance correlation coefficient equals the Pearson's correlation coefficient. In all other cases, the concordance correlation coefficient will be lower than the Pearson's correlation coefficient. The value is calculated as

Concordance correlation coefficient = 2·s_xy / (s_x² + s_y² + (x̄ − ȳ)²),   (A.12)

where x̄ and ȳ are the mean values of the two data sets x and y, s_x² and s_y² are the variances for each data set and s_xy is the covariance between the data sets [84].
Root mean squared error is commonly used for regression problems and represents the difference between the model predictions and the observed values. The value is calculated as

Root mean squared error = √( (1/n) Σᵢ₌₁ⁿ (ŷᵢ − yᵢ)² ),   (A.13)

where n is the number of instances in the data set and ŷᵢ and yᵢ are the model prediction and observed value for instance i, respectively.
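The two scores just defined can be computed directly from paired predictions and observations; a small numpy sketch with made-up values:

```python
import numpy as np

def rmse(y_pred, y_true):
    """Root mean squared error, eq. (A.13)."""
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

def concordance_cc(x, y):
    """Concordance correlation coefficient, eq. (A.12), using population moments."""
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

y_true = np.array([4.1, 5.0, 6.2, 7.8, 9.1])   # e.g. manually measured values
y_pred = np.array([4.3, 4.8, 6.5, 7.5, 9.4])   # model predictions
print(rmse(y_pred, y_true), concordance_cc(y_pred, y_true))
```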
The Kappa index measures the agreement between two raters, e.g., the model predictions and labels during classification [85]. It is calculated as

κ = (p_o − p_e) / (1 − p_e),   (A.14)

where p_o is the observed probability of agreement, which equals the accuracy defined in eq. (A.3), and p_e is the expected probability of agreement due to chance, defined as

p_e = Σ_k (n_k,1 · n_k,2) / Total²,   (A.15)

where n_k,1 and n_k,2 are the numbers of instances assigned to category k by the two raters and Total is the total number of instances. The highest possible value is 1, representing perfect agreement, and values above 0.8 are typically regarded as excellent [85]. An illustration of the κ index values for the proportion of correct model predictions is provided in Figure A.5.

Cramér's V measures the association between two categorical variables that belong to more than two categories each. When there are two categories for each variable, Cramér's V equals the ϕ coefficient [87]. It is calculated via

Cramér's V = √( (χ²/n) / min(cat1 − 1, cat2 − 1) ),   (A.16)

where χ² is the usual chi-squared statistic, n is the number of instances, and cat1 and cat2 are the number of possible categories for each variable. The value ranges from 0 to 1, representing no and perfect correlation between the variables, respectively [88].
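Both agreement measures are straightforward to compute; the sketch below uses scikit-learn for the kappa index and scipy's chi-squared statistic for Cramér's V, on invented ratings from two hypothetical raters.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import chi2_contingency

rater1 = np.array([0, 1, 1, 2, 2, 0, 1, 2, 2, 1])   # e.g. model predictions
rater2 = np.array([0, 1, 2, 2, 2, 0, 1, 1, 2, 1])   # e.g. expert labels

print(cohen_kappa_score(rater1, rater2))             # kappa index, eq. (A.14)

# Cramer's V from the chi-squared statistic of the contingency table, eq. (A.16).
cats1, cats2 = np.unique(rater1), np.unique(rater2)
table = np.array([[np.sum((rater1 == i) & (rater2 == j)) for j in cats2] for i in cats1])
chi2 = chi2_contingency(table, correction=False)[0]
v = np.sqrt(chi2 / len(rater1) / min(len(cats1) - 1, len(cats2) - 1))
print(v)
```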
In hypothesis testing, the p-value is the probability under a specific model of obtaining test results at least as extreme as those observed, under the assumption that the null hypothesis H 0 is true. H 0 is commonly defined as no difference between two data sets, while the alternate hypothesis H a states that there is a difference. Consequently, a low p-value indicates that the result is not likely under the null hypothesis, and thus strengthens the belief in H a [89].
The Average Pompeiu-Hausdorff distance reflects the distance between estimated values and true values in a metric space [90]. Lower values imply small differences between the two metric spaces. The Pompeiu-Hausdorff distance H between the subsets a and b is calculated via

H(a, b) = max( h(a, b), h(b, a) ),   (A.17)

where h(a, b) is the directed Hausdorff distance, i.e., the largest distance from a point in a to its nearest point in b. The aggregated Jaccard index is an extension of the global Jaccard index also used to measure the similarities between two sample sets [91]. A high value indicates small differences between the sample sets. The calculation of the aggregated Jaccard index is described by Kumar et al. [92], and Figure A.6 shows a visualisation.
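SciPy provides the directed Hausdorff distance, from which the symmetric Pompeiu-Hausdorff distance in eq. (A.17) follows; a sketch on two small, made-up point sets (e.g. boundary pixel coordinates of two segmentations):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

a = np.array([[0, 0], [1, 0], [2, 1]], dtype=float)   # e.g. boundary points of mask A
b = np.array([[0, 1], [1, 1], [3, 1]], dtype=float)   # e.g. boundary points of mask B

h_ab = directed_hausdorff(a, b)[0]      # directed distance from a to b
h_ba = directed_hausdorff(b, a)[0]      # directed distance from b to a
print(max(h_ab, h_ba))                  # symmetric Pompeiu-Hausdorff distance, eq. (A.17)
```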
For image segmentation, the support for a segmented area can be calculated as the number of pixels in the segmented area divided by the number of background pixels [40].
A.2. Measuring model uncertainty
Uncertainty estimates are useful in order to evaluate how certain a machine learning model is about the predictions. High uncertainty might suggest that a human expert also should have a look at the instance [93].
Among the reviewed studies, some choose not to use the model predictions of DED when the predicted probabilities are too close to 0.5, reflecting that the model is uncertain [73]. Others report the standard deviation of the model performance scores [12,34,35,62,32,33,40,47,61]. Some compute confidence intervals for the model performance scores [30,37,72,61]. A comprehensive discussion about quantifying uncertainty for medical machine learning models can be found in [93].
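Two of the simple strategies mentioned above are easy to implement: abstaining from predictions whose probability is close to 0.5, and bootstrapping a confidence interval for a performance score. The sketch below uses synthetic predictions and an arbitrary abstention threshold, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
p_pred = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=200), 0, 1)  # synthetic probabilities

# 1) Abstain (refer to an expert) when the model is uncertain, i.e. probability near 0.5.
confident = np.abs(p_pred - 0.5) > 0.15
y_pred = (p_pred > 0.5).astype(int)
print("fraction referred to an expert:", 1 - confident.mean())

# 2) Bootstrap 95% confidence interval for accuracy on the confident predictions.
acc = []
idx = np.flatnonzero(confident)
for _ in range(1000):
    s = rng.choice(idx, size=idx.size, replace=True)
    acc.append(np.mean(y_pred[s] == y_true[s]))
print("accuracy 95% CI:", np.percentile(acc, [2.5, 97.5]))
```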
|
2021-09-05T19:07:41.671Z
|
2021-09-02T00:00:00.000
|
{
"year": 2021,
"sha1": "a104faa61f69639bca248f5ffccc9acacacf98ab",
"oa_license": "CCBYNCND",
"oa_url": "https://www.medrxiv.org/content/medrxiv/early/2021/09/05/2021.09.02.21263021.full.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "163107e1e15197c740a224e5246452a407688483",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering",
"Medicine"
]
}
|
201691232
|
pes2o/s2orc
|
v3-fos-license
|
Dynamic Tax Competition, Home Bias and the gain from Non-preferential Taxation Regimes: A case for unilateral commitment
A country has an incentive to unilaterally commit to a non-preferential taxation regime even though the competitor adopts a preferential taxation regime. We show that a mixed taxation regime arises in a dynamic two-period model of tax competition between two symmetric countries where an investor has home-bias for the country where he/she invests in the initial period. A scenario where competing countries jointly adopt non-preferential taxation regimes is also a subgame-perfect equilibrium. The tax revenue of the country which adopts a preferential taxation regime in a mixed taxation regime is equal to the tax revenue a country receives when competing countries jointly adopt a non-preferential taxation regime.
Introduction
This paper is a contribution to the significant theoretical literature that has focused on the comparison of tax revenues generated when countries compete to attract capital using tax rates as a strategic tool. The strategic interaction is analyzed under a preferential regime (where countries set discriminatory taxes based on mobility, nationality, vintage, etc.) and a nonpreferential regime (under which a country is restricted not to set discriminatory tax rates). In recent years, concerned by the perceived harmful effects of such preferential measures adopted by a large number of countries, several international agreements and non-binding resolutions have been adopted by the European Union (EU) and Organization for Economic Cooperation and Development (OECD) to impose restrictions on preferential taxation among member countries and to take joint action against the continuation of preferential taxation regimes by non-member countries. The primary harmful effect motivating such agreements appears to be the erosion of tax revenues and the loss of economic efficiency due to the movement of capital between jurisdictions solely to evade tax payments. 1
While few countries have adopted non-preferential taxation regimes, many continue to follow a discriminatory preferential taxation strategy. In this scenario, it is important to analyze whether a country has an incentive to commit to a non-preferential taxation regime even when its competitors follow the preferential taxation strategy. This also highlights the significance of cooperation between countries to jointly adopt non-preferential taxation regimes. Efforts have been made by researchers to understand the strategic forces at play when countries compete with taxes to attract capital from other jurisdictions under two taxation regimes. There is a vast literature on tax competition 2 and on the effects of coordinated adoption of non-preferential regimes on tax revenues of competing countries 3 . There is an ongoing debate on whether competing countries earn higher tax revenues under a non-preferential or a preferential taxation regime 4 . While the majority of literature on tax competition is static, we look at a dynamic two-period model of tax competition between two symmetric countries who compete to attract a single investor in each period, and investments are partially or fully sunk. During the initial period, competing countries simultaneously decide whether to commit to a preferential or a non-preferential taxation regime. We show that a country has an incentive to commit to a non-preferential taxation regime even when its competitor adopts a preferential regime. When both countries adopt a preferential taxation regime, Bertrand type competition for foreign capital leads to the complete dissipation of tax revenues from foreign capital. When one country adopts a non-preferential taxation regime, competition for foreign capital is reduced because the country which adopts a non-preferential taxation regime has to lower taxes on its immobile domestic capital base as well to attract foreign capital. Less competition leads to non-zero tax revenues for both countries in the later period. Because competing countries receive positive tax revenues from the new investor, the overall tax revenue is strictly positive. Because competition in the later period is lower when the initial capital is located in the country with non-preferential taxation, the country with a preferential taxation regime offers larger tax rebates in the initial period and attracts the investor. The country with a preferential taxation regime obtains tax revenue in the later period which is not too small compared to what it can earn if it attracts capital in the initial period. Therefore, a country with a preferential taxation regime is less willing to offer tax subsidies in the initial period. When both countries adopt non-preferential taxation, the competition in the later period is similar because only one country has domestic capital. But the competition during the initial period is more intense because, for both countries, the gain from attracting the capital is higher.
The literature on dynamic tax competition is scarce 5 . Gross, Klein, and Makris (2017) show that when capital is perfectly mobile across countries and lower tax rates during the initial period increase the available stock of capital in the later period, capital tax rates tend to zero in the long run. In a dynamic fiscal competition model with multiple instruments, Arcalean (2019) finds that tax rates are lower under fiscal competition than under fiscal coordination. Our model is similar to Konrad and Kovenock (2009). They consider an infinite-horizon problem in which the country with a larger capital stock has an agglomeration advantage. In Konrad and Kovenock (2009), both countries are committed to either non-preferential or preferential regimes, and investments are fully sunk. We allow countries to simultaneously choose between preferential and non-preferential regimes, and investments can be fully or partially sunk.
A country can commit to having a preferential or a non-preferential taxation regime by signing an agreement with multinational agencies such as the OECD. As discussed in Konrad and Kovenock (2009), our analysis is related to Bertrand markets with subsets of loyal customers, which we discuss in more detail later. In Konrad and Kovenock (2009), one of the capital bases is immobile and the other tax base has a cost asymmetry that results from agglomeration advantages. In our model, one of the capital bases is perfectly mobile and the other tax base has a cost asymmetry because the investment from the previous period is partially or fully sunk. Janeba and Smart (2003) show that in the absence of the "base effect" (a fixed capital base), any restrictions on preferential taxation reduce tax revenue. In contrast, we show that even in the absence of the base effect, restrictions on preferential regimes can increase tax revenue. In Janeba and Smart (2003), both capital bases are elastic even for small tax differences between the two countries. In our paper, while one of the capital bases is infinitely elastic, the other capital base is inelastic to the difference in taxes as long as the difference is not large enough.
In our paper, tax competition in the second period relates to the literature on static tax competition between asymmetric countries, which analyzes possible gains from jointly adopting non-preferential taxation regimes (see, for instance, Haupt and Peters, 2005) without any coordination on setting tax rates 7 . The results depend on how tax competition is modelled and on the composition of the tax bases. This paper looks at competition over two tax bases which are infinitely elastic and differ in their cost of mobility. One of the capital bases has no cost of moving to either country, while the other capital base faces a positive cost of moving to the other country 8 . Marceau, Mongrain, and Wilson (2010) find a similar result when one of the capital bases can only locate in one of the competing countries, that is, when the cost of relocation to the other country is infinite.
The major contributions of this paper are the following. First, we show that when two symmetric countries compete to attract foreign investments, a country has an incentive to unilaterally commit to a non-preferential taxation regime 9 . The combined tax revenues of the competing countries are higher when one country commits to a non-preferential taxation regime and the other adopts a preferential taxation regime than when both countries adopt non-preferential taxation regimes. Second, we extend the results of Haupt and Peters (2005) and Mongrain and Wilson (2015) to a dynamic setting in which investors are large. We generalize the result of Haupt and Peters (2005) and show that non-preferential taxation results in larger tax revenues even when only one of the capital bases has a home bias. Moreover, the results hold even when the home bias is small and the capital bases are infinitely elastic. Wilson (2005) shows that when one of the capital bases is perfectly mobile and the other capital base is imperfectly mobile, a preferential regime generates higher tax revenue than a non-preferential taxation regime. We show that the result is reversed when investors are large: a non-preferential regime generates higher tax revenue than a preferential regime. We also show that a non-preferential taxation scheme not only generates higher tax revenue in the later period, it also reduces the tax subsidies provided to investors during the initial period.
Model
There are two identical countries/jurisdictions indexed by i ∈ {A, B}, which compete to attract capital from outside their jurisdictions. The economy lasts for two periods, 1 and 2. At the beginning of period 1, competing countries have no domestic capital 10 . In each period, a single investor (who owns a unit of capital) enters the market and wishes to invest either in country A or in country B. For simplicity, we assume that outside the two competing countries the return on capital is equal to 0. Once capital is invested in country A (country B), the return on capital is equal to 1 in each period. If the investor invests in country A (country B) in period 1, he has a home bias for country A (country B). Home bias is captured by the term F, with 0 ≤ F ≤ 1. Home bias can also be interpreted as the cost of capital relocation. If the investor invests in country A (country B) in period 1, then if country B (country A) wishes to attract the investor in period 2, it has to undercut the tax rate set by country A (country B) by a margin of F. We assume that competing countries cannot commit to future tax rates. At the beginning of each period, competing countries announce the tax rates applicable for that period. At the beginning of period 1, competing countries announce the tax rates applicable for period 1. The investor observes the tax rates and decides whether to invest in country A, to invest in country B, or to stay outside. At the beginning of period 2, both governments announce the tax rates applicable in period 2. The investor residing outside the two competing countries decides whether to invest in country A or country B. The investor who has already invested in country A (country B) decides whether to relocate to country B (country A) or to remain invested in the initial location. If an investor invested in country A (country B) in period 1 and decides to relocate to country B (country A), he incurs a cost F.
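The investor's period-2 relocation rule implied by this timing can be written as a one-line decision function. The sketch below is illustrative Python; the function name and the convention that the incumbent stays put when indifferent are assumptions of the sketch rather than notation from the model.

```python
def relocates(tax_home: float, tax_away: float, F: float) -> bool:
    """Period-2 relocation decision of the investor who entered in period 1.

    Staying costs `tax_home`; moving costs `tax_away` plus the relocation cost F.
    In this sketch the investor moves only when moving is strictly cheaper.
    """
    return tax_away + F < tax_home


# Example with home bias F = 0.3: a 0.2 tax advantage abroad is not enough to move.
print(relocates(tax_home=0.5, tax_away=0.3, F=0.3))   # False
print(relocates(tax_home=0.7, tax_away=0.3, F=0.3))   # True
```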
We analyze this two-period dynamic tax competition game when, at the beginning of period 1, competing countries can commit either to a non-preferential taxation strategy or to a preferential taxation strategy. Under a preferential taxation scheme, a government is free to set different tax rates for domestic and foreign capital. Under a non-preferential taxation scheme, a government is restricted to setting an equal tax rate for domestic and foreign capital. In the present scenario, competing countries have no domestic capital at the beginning of period 1. Hence, the preferential and non-preferential taxation schemes have different implications only in period 2. If a country receives an investment in period 1, then under a non-preferential taxation scheme it cannot, in period 2, set different tax rates for the investor who invested in period 1 and for the new investor arriving in period 2. For simplicity, we assume that governments wish to maximize tax revenue and investors maximize their net return on capital after tax payments. Even if a government wishes to maximize social welfare, a country would like to obtain maximum tax revenues from foreign nationals. For simplicity, we also assume that governments and investors do not discount future income.
The dynamic game we analyze can be described in three stages: Stage one: Both countries simultaneously decide whether to commit to a non-preferential regime or a preferential regime for the entire duration of the game. The same is observed by investors in both periods.
Stage two: At the beginning of period 1, both countries simultaneously announce tax rates applicable for period 1. The maximum tax rate governments can impose is equal to 1. Competing governments can set negative tax rates as well, that is, they can provide tax holidays during the initial period. The investor observes the tax rates and decides whether to invest in country A or country B. Governments receive taxes at the end of period 1.
Stage three: At the beginning of period 2, both countries simultaneously announce tax rates applicable for period 2. As before, the maximum tax rate governments can impose is equal to 1. The country which commits to a preferential taxation regime announces the tax rate applicable to domestic capital (investment from period 1) and the tax rate applicable to foreign capital. The country which commits to a non-preferential taxation regime announces a single tax rate that is applicable for domestic (the investor who previously invested in the country) and foreign capital (the potential new investor). Both investors observe tax rates and make an investment decision. The new investor decides whether to invest in country A or country B. The investor who has previously invested in country A (country B) decides whether to relocate to country B (country A) or remain invested in the initial location. Governments receive taxes at the end of period 2.
The equilibrium concept is the subgame-perfect Nash equilibrium. We do not consider the possibility of a mixed strategy Nash equilibrium at the initial stage, when countries choose whether to commit to a non-preferential or a preferential taxation strategy. In the next section, we consider the scenario in which both competing countries adopt preferential taxation regimes.
Preferential Taxation
Under a preferential taxation scheme, a country is free to set different tax rates for domestic and foreign capital. First, we look at the outcome in period 2.
Tax Competition in Period Two under Preferential Taxation
Without loss of generality, suppose the investor invests in country A in period 1. Under a preferential taxation scheme, country A sets different tax rates for the domestic investor (the investor who previously invested in period 1) and foreign capital (the new investor who enters the market in period 2). Because country B has no domestic capital, it sets a tax rate for foreign capital. Because the new investor has no cost of relocation to either country, competition between two countries drives down the tax rate to zero. Country A sets the tax rate equal to F on the investor from the earlier period. It is not beneficial for country B to set a tax rate lower than 0 to attract the investor from country A. Therefore, country A retains the investor from period 1 and obtains tax revenues equal to F. Because the tax rate on the new investor is equal to 0, country B does not receive a positive tax revenue in period 2. Lemma 1 states this result formally.
Lemma 1
The equilibrium tax revenues of country A (where the investor invests in period 1) and country B in period 2 are F and 0, respectively. In the unique pure strategy Nash equilibrium, country A sets the tax rates F and 0 respectively, on the investors from period 1 and period 2. Country B sets the tax rate equal to 0 on both investors.
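Lemma 1 can be illustrated with a small numerical check: on a discretized tax grid, the profile in which country A taxes its period-1 investor at F and the new investor at 0 while country B sets 0 leaves neither country with a profitable deviation. The grid, the function names, and the tie-breaking rule (indifferent investors go to country A) are choices of this sketch, not of the paper.

```python
F = 0.4                               # relocation cost / home bias (illustrative)
grid = [round(x * 0.01, 2) for x in range(-100, 101)]   # candidate tax rates in [-1, 1]

def revenues(tA_dom, tA_for, tB):
    """Period-2 revenues when country A hosts the period-1 investor and both
    countries run preferential regimes (B has no domestic base, so a single rate tB)."""
    new_to_A = tA_for <= tB           # ties broken in favour of country A
    stays_in_A = tA_dom <= tB + F     # the incumbent moves only if B undercuts by more than F
    rev_A = (tA_dom if stays_in_A else 0.0) + (tA_for if new_to_A else 0.0)
    rev_B = (0.0 if new_to_A else tB) + (0.0 if stays_in_A else tB)
    return rev_A, rev_B

# Candidate equilibrium from Lemma 1: A taxes its old investor at F, the new one at 0; B sets 0.
cand = (F, 0.0, 0.0)
rA, rB = revenues(*cand)

best_A = max(revenues(d, f, cand[2])[0] for d in grid for f in grid)
best_B = max(revenues(cand[0], cand[1], b)[1] for b in grid)
print(rA, rB)                                      # 0.4 0.0
print(rA >= best_A - 1e-9, rB >= best_B - 1e-9)    # True True: no profitable deviation
```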
Tax Competition in Period One under Preferential Taxation
From Lemma 1, it is clear that a country that attracts the investor in period 1 also receives a positive tax revenue in period 2. On the other hand, a country that fails to attract the investor in period 1 receives 0 as tax revenue in period 2. Hence, in period 1, competing countries offer a tax subsidy equal to the possible gain in period 2 from attracting the investor in period 1. Lemma 2 states the result. The proof is trivial.
Lemma 2
Competing countries offer a tax subsidy equal to F in period 1. The overall (two-period) tax revenue of each competing country is equal to 0.
Non-preferential Taxation
In this section, we analyze the game under a non-preferential taxation scheme. Under a nonpreferential regime, competing countries are restricted to set an equal tax rate on the investor from period 1 and the new investor in period 2. First, we look at the outcome in period 2.
Tax Competition in Period Two
Without loss of generality, suppose the investor invests in country A in period 1. Under a non-preferential taxation scheme, country A is restricted to set an equal tax rate on the investor who previously invested in period 1 and the new investor. Suppose that at the beginning of period 2, country A and country B set the tax rates tA2 and tB2, respectively. The tax revenue of country A in period 2 (TRA2) is

TRA2 = 2tA2 if tA2 ≤ tB2, TRA2 = tA2 if tB2 < tA2 ≤ tB2 + F, and TRA2 = 0 if tA2 > tB2 + F.

If country A sets the tax rate tA2 > tB2 + F, then country B attracts the new investor as well as the investor from country A. If country A sets tA2 such that tB2 < tA2 ≤ tB2 + F, country B attracts the new investor in period 2 but country A is able to keep its domestic investor because of home bias. When tA2 ≤ tB2, country A also attracts the new investor. Similarly, the tax revenue of country B in period 2 (TRB2) is

TRB2 = 2tB2 if tB2 < tA2 − F, TRB2 = tB2 if tA2 − F ≤ tB2 < tA2, and TRB2 = 0 if tB2 ≥ tA2.

Note that country B is a more aggressive competitor in period 2. Country B has to undercut the tax rate of country A by a small margin to attract the new investor. Country B can also undercut country A by a margin of F to attract the investor from country A. We assume that when an investor is indifferent between country A and country B, the investor chooses to invest in country A. As noted in Fisher and Wilson (1995), the equilibria of the game do not change if equality is replaced with inequality in the payoff function. Lemmas 3-6 describe the equilibrium outcomes.
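The piecewise revenues above translate directly into code. The following is a minimal Python rendering of TRA2 and TRB2; the function names and the numerical example are mine, while the tie-breaking follows the assumption stated above.

```python
def TR_A2(tA2: float, tB2: float, F: float) -> float:
    """Period-2 revenue of country A (hosts the period-1 investor), non-preferential regime."""
    if tA2 > tB2 + F:
        return 0.0            # loses both the old and the new investor
    if tA2 <= tB2:
        return 2.0 * tA2      # keeps the old investor and wins the new one (ties go to A)
    return tA2                # home bias keeps the old investor; the new one goes to B

def TR_B2(tA2: float, tB2: float, F: float) -> float:
    """Period-2 revenue of country B under a non-preferential regime."""
    if tB2 >= tA2:
        return 0.0            # attracts neither investor
    if tB2 < tA2 - F:
        return 2.0 * tB2      # undercuts by more than F: wins both investors
    return tB2                # wins only the new investor

# Illustration with F = 0.4: B must cut deeply to poach A's investor.
print(TR_A2(0.8, 0.5, 0.4), TR_B2(0.8, 0.5, 0.4))   # 0.8 0.5
print(TR_A2(0.8, 0.3, 0.4), TR_B2(0.8, 0.3, 0.4))   # 0.0 0.6
```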
Lemma 3
In the subgame starting at the beginning of period 2, when both countries adopt nonpreferential regimes at the earlier stage, there is no pure strategy Nash equilibrium when F > 0. However, a unique mixed strategy Nash equilibrium exists for all values of F > 0. When F = 0, there is a unique pure strategy Nash equilibrium where both countries set the tax rate equal to 0.
Proof. See Appendix A.
When F > 0, a Nash equilibrium does not exist where both countries set an equal tax rate. In this case, a country can reduce its tax rate marginally and attract the new investor with probability one. An asymmetric pure strategy Nash equilibrium, where the countries set different tax rates, also does not exist. In this case, the country with the lower tax rate attracts the new investor with probability one. If the investor from period 1 is also initially located in the country with the lower tax rate, then that country has an incentive to increase its tax rate. If the investor from period 1 is initially located in the country with the higher tax rate, then the country with the lower tax rate has an incentive to increase its tax rate if it is not able to attract the old investor. On the other hand, if the country with the lower tax rate attracts the investor from the competing country, then the other country has an incentive to lower its tax rate to keep its domestic investor. Therefore, neither a symmetric nor an asymmetric pure strategy Nash equilibrium exists. Given that a pure strategy Nash equilibrium does not exist, we analyze Nash equilibria in mixed strategies.
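The undercutting cycle described above can also be seen by brute force: on a fine tax grid no pair of rates is a mutual best response once F > 0. The sketch below is only a numerical illustration for one arbitrary value of F, using the same piecewise revenues as before.

```python
F = 0.4
grid = [round(i * 0.01, 2) for i in range(0, 101)]     # tax rates in [0, 1]

def TR_A2(tA, tB):
    if tA > tB + F: return 0.0
    return 2 * tA if tA <= tB else tA

def TR_B2(tA, tB):
    if tB >= tA: return 0.0
    return 2 * tB if tB < tA - F else tB

best_reply_A = {tB: max(TR_A2(x, tB) for x in grid) for tB in grid}
best_reply_B = {tA: max(TR_B2(tA, y) for y in grid) for tA in grid}

pure_equilibria = [(tA, tB) for tA in grid for tB in grid
                   if TR_A2(tA, tB) >= best_reply_A[tB] - 1e-9
                   and TR_B2(tA, tB) >= best_reply_B[tA] - 1e-9]

print(pure_equilibria)   # [] -- no pure-strategy equilibrium on this grid (cf. Lemma 3)
```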
A mixed strategy Nash equilibrium of this type has many parallels in the literature. The equilibrium can be used to analyze a scenario when some consumers have a switching cost while others can freely choose between different suppliers. Klemperer (1995) considers the case when some consumers have infinite switching cost, that is, they are completely tied with one supplier. Here, we allow consumers to switch suppliers by incurring a certain positive cost of switching. Konrad and Kovenock (2009) consider a scenario when some consumers have infinite switching cost and the others can switch by incurring a positive cost. Therefore, competitors only compete for a fraction of consumers who can switch. In our case, competitors compete for all consumers. It is noteworthy that the equilibrium price can be a lot higher than the switching cost even when the switching cost is small. Wilson (2005) also analyzed a mixed strategy Nash equilibrium when one of the capital bases is perfectly mobile, and the other capital base is completely immobile.
The equilibrium we describe is a special case of the mixed strategy Nash equilibrium analyzed in Fisher and Wilson (1995). Fisher and Wilson (1995) consider competition between two firms when two countries impose tariffs and the total demand is a function of price. When the total demand is constant and only one of the two countries imposes a tariff, the equilibrium is similar to ours. Our proof of the existence and uniqueness of the mixed strategy Nash equilibrium follows directly from Fisher and Wilson (1995).
Lemmas 4-6 describe the mixed strategy Nash equilibrium for different costs of mobility (home bias) captured by the variable F. Equilibrium strategies depend on whether the home bias is small, large, or in an intermediate range. Lemma 4 describes the mixed strategy Nash equilibrium when the home bias is relatively large, i.e., F ≥ 2/3.
Lemma 4
In the subgame starting at the beginning of period 2, when both countries adopt non-preferential regimes at the earlier stage, a unique mixed strategy Nash equilibrium exists when 2/3 ≤ F. In the mixed strategy Nash equilibrium, the tax revenues of country A and country B are 1 and 1/2, respectively.

Lemma 4 states that when the home bias is large enough, competing countries receive strictly positive tax revenues in period 2. The equilibrium tax revenues of the competing countries do not depend on the home bias. When F is large (F ≥ 2/3), the mixed strategy Nash equilibrium is similar to Varian (1980) and Narasimhan (1988). When F = 1, the mixed strategy Nash equilibrium is exactly similar to Narasimhan (1988). Country A can secure a tax revenue equal to 1 by setting the tax rate equal to 1 on the investor from period 1 and forgoing the new investor. Hence, the minimum tax rate country A sets is equal to 1/2, because even if country A attracts the new investor with probability 1 at a tax rate lower than 1/2, its tax revenue is lower than 1. Note that as long as 2/3 ≤ F, the mixed strategy Nash equilibrium remains the same, because country B has to set a tax rate lower than 1 − F to attract the investor from country A, which is too low to be beneficial. The interesting feature of this equilibrium is that both countries attract the new investor with a positive probability and country A retains the investor who previously invested in period 1. Lemma 5 describes the outcome when F is relatively smaller. Let us define and as and . Below we show that 0 < < , and 0 < < 1. Let ∆ be the value of F such that 1 − F = .
The value of ∆ is approximately equal to 0.54369, which is strictly less than 2/3.
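Before turning to the intermediate range, the large-home-bias equilibrium of Lemma 4 can be checked by simulation. The closed-form mixing distributions used below, a distribution for country A equal to 1 − 1/(2t) on [1/2, 1) with an atom of 1/2 at t = 1 and a distribution for country B equal to 2 − 1/t on [1/2, 1), are a Narasimhan-style reconstruction consistent with the discussion above rather than formulas quoted from the paper, so the sketch should be read only as an illustration of how the equilibrium revenues of 1 and 1/2 can be verified.

```python
import random

F = 0.8          # any home bias in the large range gives the same candidate equilibrium
N = 200_000
random.seed(0)

def draw_tA():
    # Inverse-CDF sampling: F_A(t) = 1 - 1/(2t) on [1/2, 1), plus an atom of 1/2 at t = 1.
    u = random.random()
    return 1.0 if u >= 0.5 else 1.0 / (2.0 * (1.0 - u))

def draw_tB():
    # Inverse-CDF sampling: F_B(t) = 2 - 1/t on [1/2, 1).
    u = random.random()
    return 1.0 / (2.0 - u)

rev_A = rev_B = 0.0
for _ in range(N):
    tA, tB = draw_tA(), draw_tB()
    new_to_A = tA <= tB                 # ties go to country A
    old_moves = tB + F < tA             # never occurs here: tB + F exceeds 1 on this support
    rev_A += tA * ((0 if old_moves else 1) + (1 if new_to_A else 0))
    rev_B += tB * ((1 if old_moves else 0) + (0 if new_to_A else 1))

print(rev_A / N, rev_B / N)             # approximately 1.0 and 0.5
```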
Lemma 5
In the subgame starting at the beginning of period 2, when both countries adopt non-preferential regimes at the earlier stage, a unique mixed strategy Nash equilibrium exists when ∆ ≤ F < 2/3. The equilibrium tax revenues of country A and country B are T AVT and , respectively.
Country A has a positive probability mass of at the supremum of its support. The distributions of taxes of country A (ℱA(tA2)) and country B (ℱB(tB2)) are given by (7) and (8). When country A sets a tax rate in the range ( T AVT , 1], it is beneficial for country B to undercut the tax rate of country A by a margin of F and attract the new investor and the investor who previously invested in country A. When country A sets a relatively lower tax rate in the range ( , T AVT ], it is not beneficial for country B to undercut the tax rate of country A by a margin of F. In this scenario, both countries compete for the new investor. Note that F is equal to ∆ when 1 − F = . The support of the mixed strategy Nash equilibrium of country B is disjoint because country A has a probability mass at the supremum of its support. Therefore, when country B lowers its tax rate from T AVT to 1 − F, it undercuts country A with a discrete positive probability. Lemma 6 below describes the equilibrium outcome when F is very small, i.e., 0 < F < ∆. Let us define = (∆² + ∆). Note that 1 > > ∆ > 0. The mixed strategy Nash equilibrium described in Lemma 6 is similar to the one described in Lemma 5. When F is lower, the support of the mixed strategy Nash equilibrium of country B is no longer disjoint.
Lemma 6
In the subgame starting at the beginning of period 2, when both countries adopt non-preferential regimes at the earlier stage, a unique mixed strategy Nash equilibrium exists when 0 < F < ∆. In the mixed strategy Nash equilibrium, the tax revenues of country A and country B are (1 + ∆)F and (∆ + ∆²)F, respectively. The distributions of taxes of country A (ℱA(tA2)) and country B (ℱB(tB2)) are given by (9) and (10). The intuition behind the mixed strategy Nash equilibrium described in Lemma 6 is similar to the one already described in Lemma 5. The distributions of taxes described by equations (9) and (10) are continuous when = ∆² + ∆. When country A sets a relatively high tax rate in the range [(1 + ∆)F, (1 + )F], it is beneficial for country B to undercut by a discrete margin to attract the investor from country A and the new investor. When country A sets a lower tax rate in the range [ F, (1 + ∆)F], it is not beneficial for country B to undercut by a discrete margin to attract the investor who had previously invested in country A. In this scenario, both countries compete for the new investor. Note that there is no probability mass anywhere on the supports of either country. The tax revenue of both countries decreases as F decreases.
Tax Competition in Period One
As is evident from Lemmas 3-6, the country which attracts the investor in period 1 also receives a larger tax revenue in period 2. Without loss of generality, suppose country A attracts the investor in period one. Suppose also that the tax revenues of country A and country B in period 2 are TRA2 and TRB2, where TRA2 > TRB2 when F > 0, and TRA2 = TRB2 = 0 when F = 0. When F = 0, the tax revenue of both countries is equal to 0 in period 2. No country has an incentive to provide a tax rebate in period 1 to attract the investor. Therefore, both countries set the tax rate equal to 0 in period 1. When F > 0, competing countries set the period-1 tax rate equal to −(TRA2 − TRB2) < 0. If a country sets a tax rate greater than −(TRA2 − TRB2), then the competing country has an incentive to set a lower tax rate and attract the investor in period 1. Similarly, no country has an incentive to set a tax rate smaller than −(TRA2 − TRB2), because even when it is successful in attracting the investor in period 1, its combined tax revenue over the two periods is smaller. Lemma 7 describes the outcome in period 1.
Lemma 7.
Starting from the subgame where both countries adopt non-preferential taxation regimes, the equilibrium tax revenue of each competing country over the two periods is equal to TRB2, the period-2 tax revenue of the country that fails to attract the investor in period 1. The equilibrium tax rate in period one is −(TRA2 − TRB2).

Proof. The proof is evident once we observe that the difference in the period-2 tax revenues of the competing countries is 0 when F = 0, , and (1 − ∆²)F when 0 < F < ∆.
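A quick numerical illustration of the bookkeeping behind Lemma 7, using the large-home-bias continuation values from Lemma 4 (a period-2 revenue of 1 for the country that hosts the investor and 1/2 for the other country). The numbers are specific to the F ≥ 2/3 case and to the reconstruction above; the other ranges of F follow the same pattern with their own continuation values.

```python
# Period-2 continuation revenues when both countries are non-preferential and F >= 2/3.
rev_if_host = 1.0      # country that attracted the investor in period 1 (Lemma 4)
rev_if_empty = 0.5     # country that did not

# Period-1 Bertrand-style bidding: each country is willing to subsidize up to the
# difference in continuation revenues, so the equilibrium period-1 tax is minus that gap.
t1 = -(rev_if_host - rev_if_empty)

total_winner = t1 + rev_if_host
total_loser = 0.0 + rev_if_empty
print(t1, total_winner, total_loser)   # -0.5 0.5 0.5  (both countries end up with 1/2)
```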
Mixed Taxation Regimes
In a mixed taxation regime, one of the competing countries adopts a non-preferential taxation regime and, the other adopts a preferential taxation regime. We observed that when both countries adopt a non-preferential taxation regime, the equilibrium tax revenues of competing countries are strictly positive when the cost of mobility is strictly positive. On the other hand, the tax revenue of competing countries is equal to 0 when both countries adopt a preferential taxation regime. Therefore, competing countries have an incentive to jointly adopt a nonpreferential taxation regime. In this section, we analyze outcomes under a mixed taxation regime.
Under a mixed taxation regime, the outcome of competition in period 2 depends on whether the country with a non-preferential taxation regime or the country with a preferential taxation regime attracts the investor in period 1. If the country with a non-preferential taxation regime attracts the investor in period 1, the outcomes of tax competition in period 2 under a mixed taxation regime and when both countries adopt a non-preferential taxation regime are equal. Moreover, if the country with a preferential taxation regime attracts the investor in period 1, the outcomes of tax competition in period 2 are equal to the outcomes when both countries adopt a preferential taxation regime. Observing these similarities, we omit the discussion of period 2.
Consider a mixed taxation regime where country A adopts a non-preferential while country B adopts a preferential taxation regime. We observed that the outcome of period 2 depends critically on whether country A or country B attracts the investor in period 1. Country B receives F in tax revenue in period 2 when it attracts the investor in period 1. When country B fails to attract the investor in period 1, it receives a positive amount (when F is strictly positive) as tax revenue in period 2, which is strictly less than F. When country A attracts the investor in period 1, its tax revenue in period 2 is larger than F when F > 0. If country A fails to attract the investor in period 1, its tax revenue in period 2 is 0. As country A gains more in period 2 when it attracts the investor in period 1, it offers a larger tax rebate in period 1 than country B, and the investor invests in country A.
Moreover, when both countries jointly adopt non-preferential taxation regimes, the equilibrium tax revenue of the competing countries over the two periods is equal to the tax revenue a country earns in period 2 when it fails to attract the investor in period 1. Therefore, under a mixed taxation regime, the tax revenue of the country that adopts the preferential taxation regime is equal to the equilibrium tax revenue of a country when both jointly adopt non-preferential taxation regimes.
Lemma 8 below describes the equilibrium of the game when country A adopts a nonpreferential, and country B adopts a preferential taxation regime.
Lemma 8.
Starting at a subgame where country A adopts a non-preferential taxation regime and country B adopts a preferential taxation regime, in the unique subgame-perfect Nash equilibrium, country A offers a larger tax subsidy than country B and attracts the investor in period 1. The tax revenue of country B is equal to what a country earns when the two countries jointly adopt non-preferential taxation regimes. The tax revenues of the competing countries are strictly positive when F > 0.
Comparison
We observed that when both countries commit to preferential taxation regimes, then in the unique pure strategy Nash equilibrium both countries earn zero tax revenues. When both countries adopt non-preferential taxation regimes, then in the unique subgame-perfect Nash equilibrium the competing countries earn strictly positive tax revenues when the cost of capital relocation is strictly positive. Under a mixed taxation regime, that is, when one country adopts a non-preferential and the other adopts a preferential taxation regime, both countries earn strictly positive tax revenues as long as the cost of capital relocation is strictly positive. Moreover, the tax revenue of the country which adopts the preferential taxation regime is equal to what a country earns when both jointly adopt non-preferential taxation regimes.
Suppose two countries jointly adopt non-preferential taxation regimes. If a country deviates and adopts a preferential taxation regime, its tax revenue remains unchanged. Therefore, starting from a scenario where both countries have non-preferential taxation regimes, no country has an incentive to deviate and adopt a preferential taxation regime. Now, consider a mixed taxation regime where country A adopts a non-preferential and country B adopts a preferential taxation regime. Suppose country A deviates and adopts a preferential taxation regime. From Lemma 8, we know that under a mixed taxation regime the tax revenue of country A is strictly positive as long as the cost of capital relocation is positive. On the other hand, the tax revenue of country A is zero when both countries adopt preferential taxation regimes. Therefore, country A has no incentive to deviate and adopt a preferential taxation regime. Suppose country B deviates and adopts a non-preferential taxation regime. From Lemma 8, the tax revenue of country B remains unchanged. Therefore, country B has no incentive to deviate and adopt a non-preferential taxation regime.
From the above discussion we conclude that the game has two types of subgame-perfect equilibria. In one type, one country adopts a non-preferential taxation regime while its competitor adopts a preferential taxation regime. In the other type, the competing countries jointly adopt non-preferential regimes. A scenario where the two countries jointly adopt preferential taxation regimes is not a subgame-perfect equilibrium. We do not have a precise prediction of whether both countries jointly adopt a non-preferential regime or only one of the competing countries adopts a non-preferential taxation regime. Proposition 1 below describes the subgame-perfect outcomes of the game. The subgame-perfect Nash equilibria have pure strategies at the initial stage, where competing countries decide whether to adopt a non-preferential or a preferential taxation regime. This is the main result of the paper.
Proposition 1.
The game has two types of subgame-perfect Nash equilibria. In one subgame-perfect Nash equilibrium, one country adopts a non-preferential and the other adopts a preferential taxation regime. In the other subgame-perfect Nash equilibrium, both countries jointly adopt non-preferential taxation regimes. In both types of subgame-perfect equilibria, the tax revenues of the competing countries are strictly positive when the cost of capital relocation is strictly positive.
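Proposition 1 can be summarized as a 2x2 regime-choice game played before the tax subgames. The payoff numbers below are placeholders chosen only to respect the ordering established in the text (zero revenue under joint preferential taxation, equal positive revenues under joint non-preferential taxation, and a weakly larger payoff for the non-preferential country in the mixed case); they are not values derived in the paper.

```python
from itertools import product

NP, P = "non-preferential", "preferential"

# Two-period revenues by regime profile (first entry: country A, second: country B).
# Placeholder numbers respecting the ordering argued in the text:
#   (P, P) -> (0, 0); (NP, NP) -> (v, v); mixed -> (w, v) with w >= v > 0.
v, w = 0.5, 0.65
payoff = {
    (NP, NP): (v, v),
    (NP, P):  (w, v),
    (P,  NP): (v, w),
    (P,  P):  (0.0, 0.0),
}

def is_nash(profile):
    a, b = profile
    ua, ub = payoff[profile]
    best_a = max(payoff[(x, b)][0] for x in (NP, P))
    best_b = max(payoff[(a, y)][1] for y in (NP, P))
    return ua >= best_a and ub >= best_b

print([p for p in product((NP, P), repeat=2) if is_nash(p)])
# [('non-preferential', 'non-preferential'), ('non-preferential', 'preferential'),
#  ('preferential', 'non-preferential')]
```

With these orderings, joint preferential taxation always fails the mutual-best-response test, which is exactly the content of Proposition 1.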
Role of Positive Domestic Capital
It is also important to discuss the role of the assumption that the competing countries start with no domestic capital. If both countries start with positive domestic capital and both adopt a preferential taxation strategy, then the outcome does not depend on the initial capital and, as before, Bertrand-type competition for the foreign capital drives tax revenues from foreign capital down to zero. Let us consider the interesting case when one country adopts a non-preferential and the other adopts a preferential taxation regime. The outcomes of the game do not change when the country with a preferential taxation regime has positive domestic capital, except that the tax revenue from domestic capital is added to its total tax revenue. Let us see what happens when the country with a non-preferential taxation strategy has positive domestic capital. The gain to the country in the later period from attracting new investments in the initial period remains the same. But, because the country has positive domestic capital, the cost of offering a tax subsidy to new investors is higher, because it sets an equal tax rate for domestic and foreign capital. At the same time, if the country with a non-preferential taxation regime attracts the investor in period 1, then the tax revenue of the other country from new investments increases, because competition for foreign capital in the later period is weaker. Therefore, the outcome depends on which of the two effects dominates, as long as the tax rebate offered to the investor in the initial period is strictly positive, that is, as long as the tax rate in the initial period is negative.

A similar argument can also be given when both countries adopt non-preferential taxation regimes. As long as the new capital is perfectly mobile, the gain in period 2 from attracting the new investor in period 1 is the same. The tax revenue in period 2 of the losing country also increases with the size of the domestic capital of the winner. Therefore, the smaller of the two countries is less willing to undercut the other country during the initial period. The bigger of the two countries is also willing to offer a smaller tax subsidy during the initial period as the size of its domestic capital base increases. Therefore, it is not clear whether the country with the larger capital base attracts the investor during the initial period. However, if the size of the domestic tax base is so large that the minimum tax rate a country with a non-preferential regime can set to attract the new investor is strictly positive, that is, if the tax rate in period 1 is strictly positive, then a country with a preferential taxation regime can undercut the tax rate of the country with a non-preferential regime and receive positive tax revenues from the investments in the initial period as well. In this scenario, having a preferential taxation regime is better. Therefore, it is reasonable to argue that if competing countries start with a large domestic capital base, a country would prefer having a preferential taxation strategy and would want its competitor to adopt a non-preferential strategy.
Conclusion
In a dynamic two-period model of tax competition between two symmetric countries, where an investor has a home bias for the country in which he/she invests in the initial period, we show that a country has an incentive to commit to a non-preferential taxation regime even when its competitor adopts a preferential taxation strategy. Moreover, a scenario where both countries jointly adopt non-preferential taxation regimes is also a subgame-perfect Nash equilibrium. Therefore, the model predicts that at least one of the countries adopts a non-preferential taxation regime. The gain from having a non-preferential regime is strictly increasing in the home bias as long as the home bias is not too large. When the home bias is above a critical level, the gain from having a non-preferential regime is independent of the home bias. While the literature on tax competition has identified that "home bias" can make non-preferential taxation preferable to a preferential regime when investors are small with heterogeneous home bias, we show that even when investors are large with a discrete home bias, a non-preferential regime generates higher tax revenue than a preferential regime. Moreover, we show that even when only one of the capital bases has a home bias, a non-preferential regime generates higher tax revenue than a preferential regime. This paper also quantifies the gain from a non-preferential regime with a parameter that captures home bias and provides clear comparative statics.
9 Appendix A

Proof of Lemma 3. Without loss of generality, suppose country A attracts the investor in period 1. Suppose there is a symmetric pure strategy Nash equilibrium where both competing countries set an equal tax rate t. Note that t should be greater than 0, because country A can receive a positive tax revenue by setting a higher tax rate and receiving taxes only from its domestic investor. For any t > 0, country B would lower its tax rate slightly and attract the new investor with probability one. Hence, there is no symmetric pure strategy Nash equilibrium. Suppose there is an asymmetric pure strategy Nash equilibrium where country A and country B set tA and tB such that tA > tB > 0. In this scenario, country B attracts the new investor, but it has an incentive to increase its tax rate. Similarly, there is no possibility of a pure strategy Nash equilibrium where 0 < tA < tB. ∎

Proof of Lemma 4. The proof follows from Narasimhan (1988). This is also a special case of the mixed strategy Nash equilibrium discussed in Fisher and Wilson (1995). Without loss of generality, suppose country A attracts the investor in period one. Note that country A can undercut country B by a small margin and attract the new investor. Country B can undercut country A by a small margin and attract the new investor. Country B can undercut country A by a margin of F and attract the investor located in country A.
Step 1. First, we show that when F ≥ 2/3, country B has no incentive to undercut country A by a margin of F. To prove the claim, suppose country A sets the tax rate equal to 1.
Country B can set a tax rate marginally lower than 1 and obtain 1 as tax revenue. Suppose instead that country B undercuts by a margin of F and attracts both investors. The tax revenue of country B in this case is equal to 2(1 − F) < 1 when F > 1/2. It follows that country B has no incentive to undercut by a margin of F. Therefore, each country wishes to undercut its competitor by a small margin to attract the new investor when F ≥ 2/3.
Step 2. Now we follow Narasimhan (1988) to show that the supports of any mixed strategy Nash equilibrium are convex. Suppose country and country randomize over h * and j * in a mixed strategy Nash equilibrium. First, we show that strategy sets / * and 6 * are convex. We prove by showing that there are no holes in = h * ∩ j * and then showing that there are no holes in l = h * − h * ∩ j * . Let l = inf ( ) and ll = sup ( ). To show that is convex, we show that there are no "holes" in . That is, there is no interval = ( t , u ) such that, for l < t < u < ′′ and ∈ , ∉ . This could happen when one of the countries has support over the interval and the other one does not or when neither country has support over the interval . We show that neither of these two is possible. First note that if the th country sets ∈ with probability zero, then so does the th country. To see this, let A and 0 be defined as A ∈ h * and A = sup{ | < t }, 0 ∈ h * and 0 = inf{ | > u }.
We define = 1 when = , and = 0 when = . Since the tax revenue of the th country, when it charges j = { j . + z1 − h B j D{ j }, is increasing in j for j ∈ , country is better off charging 0 with probability z j ( 0 ) − j ( A ){ and no mass over the set . Now consider the case that neither country is randomizing over the set . The tax revenue of the th country when she charges A is A . + [1 − h ( A )] A . Next consider the tax revenue that would accrue if the th country charges u . The tax revenue equals u . + z1 − h B u D{ u .
But, since h B u D = h B t D = h ( A ), the revenue obtained by charging u are strictly greater than the revenue obtained by charging A , contradicting the assumption of an equilibrium.
No Holes in ′. Here, ′ corresponds to the set of taxes charged by but not by . Once again, we define l = ( ′) and ll = ( ′). Note that either ll < B j * D or l > B j * D. Note that ′ cannot contain any holes. If it did, country could strictly make itself better off by moving the mass from the lower end of the hole to the upper end since by doing so it does not lose the investor but charges a higher tax rate.
Step 3. Next, we show that neither country can have a mass point at the interior or at the lower boundary of the other's support, nor can either country have a mass point at the upper boundary of other's support if that boundary is a mass point for the other country.
Let h l = ( h * ) and h ll = ( h * ). We note that l > 0 since the country should make positive tax revenue in equilibrium. Assume to the contrary that the country sets a tax * , h l ≤ * < h ll with probability . We show that country can increase its revenue by changing strategy.
(A2) Subtracting (A2) from (A1) we get −2 (1 + ) + * z j ( * + ) − j ( * − ){ + z j ( * − ) + j ( * + ){ (A3) For small enough , this is strictly positive for = 1 as well as = 0. This suggest that country , by shifting some mass to the left of * from the right of * , can be made strictly better off, contradicting the equilibrium. Now we consider the case when * = h ll and country has a mass point at h ll equal to h . In this case country can do better by charging ( h ll − ) with probability h and charging h ll with zero probability.
Step 4. Next, we show that the two sets of strategies h * and j * are identical when neither country has a mass point. If country has a mass point at ll , then country will set ll with zero density in equilibrium. That is, country will randomize in the half open interval [ l , ll ) when country has a mass point at ll . To see this, consider the case of no mass points. Assume to the contrary that (without loss of generality) h * ⊂ j * . This implies that an interval exists with country having no support over the interval, but country does. Further, from Step 2, this interval where country has no support is either at the lower or at the upper end. That is, for j ∈ = j * − j * ∩ h * , either j < h l or j > h ll . If the interval is below h l , then firm is strictly better off by charging h l with probability j ( h l ) and not set taxes below h l with positive density.
However, if such an interval exists above h ll , then firm is strictly better off charging h ll with probability z1 − j ( h ll ){ and not setting taxes in this interval. In either case, randomizing over the same set as the rival strictly dominates randomizing over j * .
Assume now that country has a mass point at j ll . It is easy to argue that country is better off setting taxes arbitrary close to j ll .
Step 5. Now we show that ( / * ) = 1. To see this, suppose ( / * ) = ll < 1. The revenue for country A is equal to ′′ because the other country sets the tax rate below ll with probability one. Therefore, country A does better by setting the tax rate equal to 1. Now we show that proposed strategies described by (3) and (4) constitutes a mixed strategy Nash equilibrium.
Step 6. If country A (country B) sets a tax rate ∈ K A 0 , 1L, their expected tax revenue can be represented as We already observed that country A cannot do better by setting a tax rate below Hence, its tax revenue decreases if the tax rate is reduced. ∎ Proof of Lemma 8. We derive a unique subgame-perfect Nash equilibrium of the game starting at the stage where one country adopts a non-preferential, and the other adopts a preferential taxation regime. We show that the country which adopts a non-preferential taxation regime offers a larger tax rebate during the initial period and attract the investor in period 1. This is true for all > 0.
Without loss of generality, suppose country A adopts a non-preferential taxation regime and country B adopts a preferential taxation regime.
Firstly, consider the case when F ≥ 2/3. If country B attracts the investor in period one, it earns F in period two. If country A attracts the investor, country B earns 1/2 in period two. Therefore, the maximum tax subsidy country B is willing to offer in period 1 is equal to (F − 1/2). If the investor invests in country B, then it pays 1/2 over the two periods. Country A earns 1 when it attracts the investor in period 1. Country A earns 0 when it fails to attract the investor in period 1. Suppose country A offers a tax subsidy equal to 1 in period 1 and the investor invests in country A. The maximum tax payment in period two is equal to 1; therefore, the tax payment by the old investor over the two periods is at most 0.
The investor will invest in country A as long as its expected payment is less than 1/2. Therefore, country A can offer a tax rebate of less than 1 and attract the investor in period one. Therefore, when F ≥ 2/3, the country which adopts a non-preferential regime attracts the investor in period 1.
Second, consider the case when F takes an intermediate value, i.e., ∆ ≤ F < 2/3. If country B fails to attract the investor in period 1, its tax revenue in period 2 is strictly less than F. If country B attracts the investor in period 1, its tax revenue in period 2 is equal to F. Therefore, the maximum tax rebate country B offers to attract the investor in period 1 is equal to the difference between F and its period-2 revenue when it fails to attract the investor. If country A attracts the investor in period one, its tax revenue in period 2 is equal to its equilibrium revenue from Lemma 5. If country A fails to attract the investor in period 1, its tax revenue in period 2 is equal to 0. Therefore, the maximum tax rebate country A offers in period 1 is equal to its period-2 revenue from Lemma 5. We argue that country A offers a tax rebate smaller than this amount in period 1 and attracts the investor. Suppose country A offers a tax subsidy equal to this amount in period 1. If the investor invests in country A, then the combined tax revenue of the two competing countries in period 2 is the sum of the two equilibrium revenues from Lemma 5, where the first term is the tax revenue of country A and the second term is the tax revenue of country B. Note that the minimum tax rate country B sets in the mixed strategy Nash equilibrium in period 2 is equal to country A's period-2 revenue minus F, which is also equal to the minimum amount the new investor (the investor who enters in period 2) pays as taxes in period 2. Therefore, the maximum tax payment in period 2 by the investor from period 1 is equal to

Third, consider the case when 0 < F < ∆. As before, the maximum tax rebate country B is willing to offer in period one is equal to F − (∆ + ∆²)F. If the investor invests in country B, it pays (∆ + ∆²)F over the two periods. Country A earns (1 + ∆)F if it attracts the investor in period one. Suppose country A offers a tax subsidy of (1 + ∆)F in period one. If the investor invests in country A, then its expected payment over the two periods is less than (1 + )F − (1 + ∆)F ≡ ∆²F. Therefore, country A offers a tax subsidy smaller than (1 + ∆)F in period one and attracts the investor.
Therefore, we have shown that for all values of F, country A attracts the investor and earns strictly positive tax revenues over the two periods. ∎

10 Appendix B
Proof of Lemma 5:
We need to show that strategies defined by (7) and (8) From (5) and (B6), it is clear that the distribution of taxes over the support of country A is also continuous. The remaining part of the proof we show in two steps. In step 1 we show that competing countries earn an equal amount everywhere on the support. In step 2 we show that a country cannot do better by adopting a different strategy.
Step 1: First, we show that country A earns an equal amount everywhere on the support.
Using (8) (B8) Taking note of the fact that the distribution of taxes of the support of country B is continuous with no probability mass anywhere on the support, we can conclude using (B7) and (B8) that country A obtains an equal tax revenue everywhere on its support.
Similarly, we show that country B obtains equal tax revenues everywhere on its support.
When country B sets ∈ K∅, ∅ AV∅ L, its tax revenue is given as When country B sets the tax rate ∈ K ∅ AV∅ − , 1 − L, its tax revenue is From (B9) and (B10), it is clear that country B earns an equal tax revenue everywhere on the support.
Step 2. Now we prove that no country can do strictly better from unilateral deviation. Note that country B does not set taxes such that 1 − < < ∅. Hence, if country A deviates and sets a tax rate such that 1 − < < ∅, then it is not undercutting the tax rate of country B with a greater probability but still setting a lower tax rate. Suppose country A deviates and sets a tax rate such that , which is not true. Therefore, we conclude that country A cannot do better from a unilateral deviation.
Now we show that country B has no incentive to deviate from proposed strategy unilaterally. Following arguments similar to above, it is easy to see that country B cannot do better by setting a tax rate such that 1 − < < ∅. We need to check for ∈ K ∅ AV∅ , 1L and < ∅ AV∅ − . From (7), the tax revenue of country B for ∈ K ∅ AV∅ , 1L is equal to (B16) Using (7), the tax revenue described in (B16) can be represented as Differentiating (B17) with respect to we obtain 1 + ∅Q (XUQ) Z which is greater than zero, that is tax revenue is increasing in . From (B16) and (B17), it is clear that the tax revenue of country B is decreasing in its tax rate if the tax rate is greater than ∅ AV∅ , and the tax revenue is increasing in its taxes when it lower than ∅ AV∅ − . This proves that country B cannot do better by unilaterally deviation. Using the argument in Appendix C, we state this equilibrium is unique. ∎ Proof of Lemma 6. Here we show that the proposed strategies constitute a mixed strategy Nash equilibrium. The existence and uniqueness of the equilibrium follow directly from Fisher and Wilson (1995). The sketch of the proof is provided in an online appendix (Appendix C).
First, we show that the distribution of taxes of competing countries are continuous over the support. Distribution of taxes over the support of country A for taxes over the range [ , (1 + ∆) ] and [(1 + ∆) , (1 + ) ] is given by (9). Distribution of taxes over the support of country B is given by (10). From (10) The remaining part of the proof we show in two steps. In step (1), we show that competing countries receive an equal tax revenue everywhere on the support. In step (2), we show that competing countries cannot do better by unilateral deviation from the proposed strategies.
Step (1): Suppose country A sets the tax rate /0 in the range B(1 + ∆) , (1 + ) D. The expected tax revenue is equal to /0 [1 − ℱ 6 ( /0 − )]. Note that in this case /0 − ∈ (∆ , ). Using (10), the expected tax revenue of country A is: Similarly, when country A sets /0 ∈ ( , (1 + ∆) ), the tax revenue is equal to (9), the tax revenue is represented as From (B25) and (B26), it is clear that country B earns an equal tax revenue everywhere on the support. Now in step (2), we show that no country can do strictly better from unilateral deviation.
Step (2): First, we show that country A do not find it beneficial to set a tax rate outside the support. Suppose country A sets a tax rate greater than (1 + ) . Using (10), we can state that the expected tax revenue of country A at the tax rate /0 is equal to From (B27), it is clear that country A cannot do better by setting a tax rate higher than (1 + ) . Now, suppose country A sets a tax rate /0 which is lower than the infimum of the support. Note that if it sets a tax rate lower than ∆ , then the maximum tax revenue it can obtain is equal to 2∆ , which is less than equilibrium tax revenues. Hence, we only need to verify that tax revenue of country A for /0 > ∆ . From (10), the tax revenue of country A in this case is equal to /0 = /0 + /0 [1 − ℱ 6 ( /0 )] = /0 + /0 \ "… YZ VX YZ From (B27) and (B28), it is clear that country A cannot do better if it sets a tax rate outside the proposed support for the mixed strategy Nash equilibrium. Now, we show that country B cannot set a tax rate outside its support and do strictly better. Suppose country B deviates and sets a tax rate which is lower than the infimum of the support. If it sets 60 ≡ ( − ) or less, it can attract both investors with probability one, but it earns negative tax revenues. Thus, we concentrate on the range of taxes at which country B attracts the new investor with probability one and attracts the investor residing in country A with a positive probability, that is 60 ∈ (0, ∆ ). In this case, the tax revenue of country B equals 60 ≡ 60 + 60 [1 − ℱ / ( 60 + )]. Using (9), we can represent the same as Now, if country B sets a tax rate above the supremum of the support of country A, it gets tax revenues equal to 0. Suppose country B sets a tax rate 60 which is greater than the supremum of the support of country B but less than the supremum of country A, that is 60 ∈ B(1 + ∆) , (1 + ) D. The tax revenue of country B is equal to 60 ≡ From (B29) and (B30), it is clear that country B cannot do strictly better by setting a tax rate outside its support for proposed mixed strategy Nash equilibrium. ∎ 11 Appendix C (Should be online) Following Fisher and Wilson (1995), we prove the uniqueness of mixed strategy Nash equilibrium. Let h ( ) be the decumulative distribution function of the tax rates charged by country , that is h ( ) = 1 − h ( ) where h ( ) is the distribution function. The function h ( ) may not be continuous. Let h ( ) denote the probability mass of h ( ) at the tax rate . Without a loss of generality, we assume that country A attracts the investor in period one. h ( ) is the probability that country sets a tax rate greater than or equal to . Then (C1) Is the expected tax revenue of country A from setting the tax rate equal to . 6 ( − ) is the probability that country B sets the tax rate greater than ( − ). The 6 ( − ) reflects the assumption that country A keeps its domestic investor with probability 1/2 when both sets the same tax rate. Similarly, 6 ( ) is the probability that country B sets the tax rate greater than so that country A attracts the new investor in period two. The term Note that > 0 inures positive revenues for country A. Therefore, the support of taxes chosen by country A must be bounded away from zero.
Proof. Lemma 2 implies that country A can have a probability mass at 1, and country B cannot have a probability mass anywhere on the support. Suppose country A has a probability mass of > 0 at < 1. Then there is a δ > 0 such that 6 ( − + ) = 6 ( − ) and 6 ( + ) = 6 ( ). Because each country earns positive revenues 6 ( − ) > 0 and 6 ( ) > 0. Then from (C1), we can observe that the expected revenues of country A is greater at ( + ) compared to . Note that when country A has a mass point at > 0, country B cannot have a mass point because it can reduce its tax rate by a small margin and increase tax revenues. If country A has a mass point at = 1, then it cannot increase its tax rate. Therefore, we conclude that country A can only have a probability mass at = 1.
It follows from the above discussion that country B cannot have a mass point at ≠ 1. Now we will show that country B cannot have a mass point at = 1. It is not beneficial for country B to have a mass point at 1 if country A also has a mass point at 1. If country A has no mass point then country B is not undercutting the tax rate of country A with a positive probability. Therefore, country B can reduce its tax rate and do better. Therefore, we conclude that country B cannot have a mass point anywhere on its support. The proof is complete. ∎ (b) For > ( 6 + ), the expected revenues of country A is zero because it loses its domestic investor and the new investor with probability one. Using a similar argument, we can show that 6 ( / ) = 0.
(c) We know that 6 ( / − ) > 0, which implies / − ≤ 6 ≤ 1. To see the second inequality, suppose to the contrary that / < 6 . The expected revenues of country B near 6 is equal to zero. This contradicts Lemma 1. (d) Suppose to the contrary that / > 6 + . This implies 6 < / − . Country B attracts both investors with probability one for any tax rate below / − . Therefore, tax revenues of country B is increasing in the tax rate below / − . Therefore, it will not set a tax rate below / − . To prove the second inequality, suppose to the contrary that 6 > / . For any tax rate below 6 , tax revenues of country A is increasing in . Therefore, country A does not set a tax rate below 6 . ∎ Part (a) states that country B also sets taxes such that it is not undercutting country A's tax rate with probability one. Part (b) states that country A does not set a tax rate so high that it loses its domestic investor with probability one. Moreover, country B does not set a tax rate greater than the supremum of the support of country A.
Proof. Suppose country sets h then it does not attract the new investor. If it sets h − then it also attracts the new investor. Therefore, it makes sense to set h only if h ≥ 2( h − ) →. h ≤ 2 .
Using the assumption that h < h − , we now establish a lower bound for h .
Proof. First, we prove it for = . As before, we define h • B , j D and h ž B , j D as equilibrium tax revenues from old investments (investments from period one) and new investments, respectively. Therefore, h B , j D = h • B , j D + h ž B , j D. Since equilibrium tax revenues is strictly positive, it follows that / ( / , 6 ) − / ( / + , 6 ) ≥ 0.
Similarly, we can show that / + < 6 ≤ / . The proof is complete.
Dendritic Cell Response to HIV-1 Is Controlled by Differentiation Programs in the Cells and Strain-Specific Properties of the Virus
Dendritic cells (DCs) are potent antigen-presenting cells that might play contradictory roles during HIV-1 infection, contributing not only to antiviral immunity but also to viral dissemination and immune evasion. Although DCs are characterized by enormous functional diversity, it has not been analyzed how differentially programmed DCs interact with HIV-1. We have previously described the reprogramming of DC development by endogenously produced lactic acid that accumulated in a cell culture density-dependent manner and provided a long-lasting anti-inflammatory signal to the cells. By exploiting this mechanism, we generated immunostimulatory DCs characterized by the production of TH1 polarizing and inflammatory mediators or, alternatively, suppressed DCs that produce IL-10 upon activation, and we tested the interaction of these DC types with different HIV-1 strains. Cytokine patterns were monitored in HIV-1-exposed DC cultures. Our results showed that DCs receiving suppressive developmental program strongly upregulated their capacity to produce the TH1 polarizing cytokine IL-12 and the inflammatory chemokines CCL2 and CCL7 upon interaction with HIV-1 strains IIIB and SF162. On the contrary, HIV-1 abolished cytokine production in the more inflammatory DC types. Preincubation of the cells with the HIV-1 proteins gp120 and Nef could inhibit IL-12 production irrespectively of the tested DC types, whereas MyD88- and TRIF-dependent signals stimulated IL-12 production in the suppressed DC type only. Rewiring of DC cytokines did not require DC infections or ligation of the HIV-1 receptor CD209. A third HIV-1 strain, BaL, could not modulate DC cytokines in a similar manner indicating that individual HIV-1 strains can differ in their capacity to influence DCs. Our results demonstrated that HIV-1 could not induce definite and invariable modulatory programs in DCs. Instead, interaction with the virus triggered different responses in different DC types. Thus, the outcome of DC-HIV-1 interactions might be highly variable, shaped by endogenous features of the cells and diversity of the virus.
Introduction
Due to their profound functional diversity, dendritic cells (DCs) can potentiate various types of immune reactions ranging from inflammatory TH1 responses against intracellular pathogens to the establishment of antigen-specific tolerance. Such functional plasticity can be exploited by pathogens, including HIV-1, to achieve immune evasion (1). DCs can bind, preserve, and transfer infective virions to CD4 + lymphocytes, which might facilitate HIV-1 dissemination both at the mucosal sites of infection and inside peripheral lymphoid tissues (2). In addition, rewiring DC functions might help HIV-1 to dampen antiviral immunity, and, indirectly, it can also decrease responses against non-HIV-related antigens, potentially influencing the outcome of vaccinations or immunotherapies in HIV-1-infected individuals.
Dendritic cells are equipped with an array of delicate pattern recognition receptor (PRR) systems for invading pathogens, as exemplified by the high number of molecules binding HIV-1 or viral compounds including lectins such as CD209 (DC-SIGN), SIGLEC1, mannose receptor or DCIR, the CD4 molecule, the TAM receptors or sensors for viral nucleotides including cGAS in the cytosol, and toll-like receptors (TLRs) in the endosomes. In response to PRR activation, DCs upregulate costimulatory and MHC molecules, which facilitate antigen presentation, and migrate to peripheral lymphoid tissues producing soluble factors that increase inflammation and regulate the differentiation of helper T cells. This scenario might be altered upon encountering HIV-1 as DCs can be either directly infected by the virus, albeit at a relatively low efficiency, or the cells can also be affected in a bystander manner by the binding of various HIV-1-derived compounds (1). Infection of DCs has been shown to increase the costimulatory potential and the ability of the cells to induce T cell activation through an autocrine loop of type-I IFN-mediated DC activation (3,4). Upregulation of costimulatory molecules in HIV-1-treated DCs has been detected in several studies (5)(6)(7); however, it has also been demonstrated that the infection of DCs by HIV-1 inhibited the production of IL-12, the key cytokine supporting TH1 responses (5). Both the HIV-1 envelope protein gp120 and the viral protein R have been implicated in the inhibition of IL-12 production (8,9), and additionally both compounds contributed to an increased production of the immunosuppressive cytokine IL-10 (9,10). In contrast to the aforementioned findings, it has also been shown that HIV-1 infection inhibits costimulatory molecule expressions in DCs (11) and HIV-1 binding to CD209 molecules increased IL-12 gene expression in activated DCs (12).
Such often-contradictory findings on HIV-1-mediated stimulatory and inhibitory signals in DCs potentially reflect the concomitant activation of antiviral immune mechanisms and HIV-1-specific immunosuppressive signals in the cells. Nevertheless, diversity in the experimental systems and virus preparations might also contribute to variable results. In this study, we decided to evaluate the impact of DC heterogeneity on responses to HIV-1. We exploited an endogenous, lactic acid-mediated mechanism in developing DC cultures to generate DCs with strong inflammatory and T cell stimulatory potential or, alternatively, suppressed DCs characterized by robust IL-10 production (13,14) and we tested the interaction of these cells with HIV-1. Our results indicated radically opposing responses in the two DC types upon encountering HIV-1. The virus strains IIIB and SF162, although presenting little infectivity in DC cultures, strongly upregulated the secretion of IL-12, CCL2, and CCL7 in suppressed DCs, whereas these virus strains abrogated cytokine production in the more immunostimulatory DC types. HIV-1 BaL, on the contrary, had no impact on cytokine production indicating that strain-specific features might also influence DC-HIV interactions. Our results thus indicated a previously unnoticed high level of complexity in HIV-1 DC interactions, where DC endogenous mechanisms largely determined the response to virus binding. These findings highlighted the need for more in-depth studies on HIV-1 interactions including different in vivo existing DC populations and variable virus strains, to understand better the role of DCs in HIV-1 pathogenicity.
Materials and Methods
Generation of Monocyte-Derived DCs
The study was performed in accordance with the ethical permit approved by the ethical committee at Karolinska Institutet. Blood samples (buffy coats) from healthy donors were collected at the Karolinska Hospital. Ethical permission was needed to use human cells for our study but consent from blood donors about the specific purpose of the experimental work using these buffy coats was not required. Monocytes were isolated from peripheral blood mononuclear cells (PBMCs) using CD14 microbeads (Miltenyi Biotec, Bergisch Gladbach, Germany) after Ficoll gradient centrifugation. Monocytes were cultured at cell culture concentrations of 2 × 10 6 cells/ml or 0.2 × 10 6 cells/ml in the presence of 50 ng/ml IL-4 (Peprotech, London, UK) and 75 ng/ml GM-CSF (Gentaur, Kampenhout, Belgium) in RPMI 1640 medium supplemented with antibiotics and 10% FCS (Life Technologies). By using dense and sparse cultures, we utilized a cell culture density-dependent differentiation switch in the developing cells and generated DCs with unique cytokine profiles [(13); Figure S1 in Supplementary Material]. On day 3, the cells were collected and counted using trypan blue exclusion. For DC activation, 250 ng/ml LPS (Invivogen, CA, USA) was used. HIV-1 envelope glycoproteins and Nef were obtained from the NIH AIDS reagent program, and Mycobacterium tuberculosis mannosylated lipoarabinomannan (ManLAM) was kindly provided by Andrzej Pawlowski, Lund University, Lund, Sweden. Endotoxin contamination was tested using the THP-1-XBlue-MD2-CD14 bioassay system (Invivogen).
HIV-1 Propagation and Treatment of DC Cultures
The virus strains SF162, IIIB, and BaL were propagated in PBMC cultures activated by 2.5 μg/ml phytohemagglutinin (PHA) and 10 U/ml IL-2 (both from Sigma-Aldrich, St. Louis, MO, USA). Virus-free control supernatants were also generated using the same PBMC culture conditions. Virus and control preparations were concentrated 50× and thereafter washed in 50× volume PBS, using 100 kDa MW centrifugation filters (Merck Milipore, Billerica, MA, USA). Tissue culture 50% infectious doses (TCID50) were determined using PBMC cultures activated by PHA and IL-2 and treated with serial virus stock dilutions in six replicates. HIV-1 infection was monitored in these cultures on day 7, following addition of the virus, using p24 ELISA (Biomerieux, Marcy Letoile, France). TCID50 was calculated using the Spearman and Karber algorithm (15). For treatment of DC cultures, 100 TCID50 of each virus isolate was used. The cells were treated for 24 h with the viruses or respective control preparations followed by removal of supernatant and activation with LPS for additional 24 h. For detailed fractionation of the HIV-1 and control supernatants, we first used 300-kDa centrifugation filters (Sigma-Aldrich), followed by the subsequent centrifugation of the flow-through fractions using 30-kDa filters. In some experiments, DCs were treated with HIV-1 in the presence of 2.5 μg/ml AZT (Sigma-Aldrich) or inhibitors of MyD88-and TRIF-mediated signals (Invivogen) used in the concentration of 25 μM.
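For readers unfamiliar with the Spearman and Karber endpoint calculation cited above, the sketch below implements one common form of the estimator in Python. The dilution series, replicate count, and dose values are hypothetical and are not taken from this study; the estimator assumes all wells are positive at the highest dose tested and all negative at the lowest.

```python
import numpy as np

def log10_tcid50_spearman_karber(positive_wells, n_replicates, log10_highest_dose, log10_step):
    """One common form of the Spearman-Karber 50% endpoint estimate.

    positive_wells: number of infected (p24-positive) wells at each dose,
                    ordered from the most concentrated to the most dilute.
    Assumes the fraction infected is 1 at the highest dose and 0 at the lowest.
    """
    p = np.asarray(positive_wells, dtype=float) / n_replicates   # fraction infected per dose
    return log10_highest_dose + 0.5 * log10_step - log10_step * p.sum()

# Hypothetical example: six replicate wells per ten-fold dilution step,
# with the most concentrated inoculum corresponding to a 10^-1 dilution of the stock.
est = log10_tcid50_spearman_karber([6, 6, 4, 1, 0], n_replicates=6,
                                   log10_highest_dose=-1.0, log10_step=1.0)
print(f"50% endpoint at a 10^{est:.2f} dilution, i.e. ~10^{-est:.2f} TCID50 per inoculum volume")
```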
DC Infection Experiments
Dendritic cells were treated with the virus or control preparations for 24 h, then washed, and cultured for 7 days in the presence of GM-CSF and IL-4. Allogeneic CD4 + T cells were enriched from buffy coats using the CD4 + T cell isolation kit (Miltenyi Biotec) and added to some of the cultures in 1:3 (DC:T cell) ratio together with 10 U/ml IL-2 (Sigma-Aldrich). To detect DC infection, intracellular stainings were performed with the anti-p24 KC57-RD1 antibody (Beckman Coulter, Brea, CA, USA), using 4% paraformaldehyde fixative and Perm/Wash buffer (BD Biosciences), and the samples were analyzed using flow cytometry.
Flow Cytometry
FITC-labeled anti-CD80, PE-labeled anti-CD86, PE-Cy5-labeled anti-CD83, and APC-conjugated anti-CD209 and anti-CD95 antibodies were obtained from BD Pharmingen (San Diego, CA, USA). Dead cells were stained using the Live/Dead detection kit with a near-infrared dye (Invitrogen, Carlsbad, CA, USA). The samples were analyzed using a CyAn ADP Analyser (Beckman Coulter, Brea, CA, USA), and the data were analyzed using FlowJo version 9.2 (Tree Star Inc., Ashland, OR, USA).
CD209 Silencing
CD209-specific and control siRNA were obtained from Applied Biosystems. Electroporations were performed in opti-MEM medium (Invitrogen) in 4-mm cuvettes (Biorad, Hercules, CA, USA) using the GenePulser X cell from Biorad. The cells were then cultured for 48 h in the presence of siRNA, and CD209 expression was analyzed using flow cytometry.
Statistical Analysis
We used parametric (one-way ANOVA with Tukey's posttest) and non-parametric (Kruskal-Wallis test, Dunn's posttest) statistical tests when comparing relative cytokine expressions and paired t-test or Wilcoxon signed-rank test to compare absolute concentrations, depending on the distribution of the variables. Statistical analyses were performed using Prism (version 5.0a, GraphPad Software Inc., San Diego, CA, USA).
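As an illustration of this testing strategy, the sketch below shows equivalent calls in SciPy rather than Prism; all values, group sizes, and the decision between parametric and non-parametric variants are made up for demonstration only.

```python
import numpy as np
from scipy import stats

# Hypothetical paired cytokine readouts (pg/ml) from control- and HIV-1-treated DC cultures.
control = np.array([120.0, 95.0, 150.0, 80.0, 110.0])
treated = np.array([310.0, 200.0, 420.0, 150.0, 260.0])

# Paired comparison of absolute concentrations: paired t-test if the differences look
# roughly normal, otherwise the non-parametric Wilcoxon signed-rank test.
t_stat, p_paired_t = stats.ttest_rel(treated, control)
w_stat, p_wilcoxon = stats.wilcoxon(treated, control)

# Comparison of relative (normalized) expression across three groups: one-way ANOVA
# (parametric) or Kruskal-Wallis (non-parametric); post-hoc Tukey or Dunn tests would follow.
rng = np.random.default_rng(0)
g1, g2, g3 = (rng.normal(loc, 0.3, size=6) for loc in (1.0, 2.5, 1.2))
f_stat, p_anova = stats.f_oneway(g1, g2, g3)
h_stat, p_kruskal = stats.kruskal(g1, g2, g3)

print(p_paired_t, p_wilcoxon, p_anova, p_kruskal)
```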
HIV-1 Promotes Unique Functional Responses in Different DC Types
To analyze how differentially programmed DC types respond to HIV-1, we utilized a previously characterized cell concentration-and lactic acid-dependent mechanism that allowed us to obtain monocyte-derived inflammatory DCs (DC inf ) producing high levels of the TH1 polarizing cytokine IL-12 together with the inflammatory mediators TNF, CCL2, and CCL7 or, alternatively, suppressed DCs (DC sup ) that secreted high amounts of IL-10 upon activation (13) (Figure S1 Supplementary Material). DC-HIV interactions have so far been primarily studied using human monocyte-derived DCs, and therefore, the functional variability achieved in this cell type provided us with an experimental platform that is both comparable and relevant in light of previous findings. Preincubation of the different DC types with the HIV-1 strains SF162 and IIIB (an R5 and X4 strain, respectively) induced substantial reprogramming in cytokine production triggered by the TLR4 ligand LPS (Figures 1A-C). DCs developing in dense cultures acquired a suppressed phenotype during their differentiation; however, in response to HIV-1 exposure, these cells strongly increased their potential to produce IL-12 and the inflammatory chemokines CCL2 and CCL7 (Figures 1A,B). We have also observed a tendency of decreased IL-10 production in HIV-1 treated samples, although this effect did not reach statistical significance. On the contrary to DC sup , the production of IL-12, CCL7, TNF, and IL-6 was dampened by HIV-1 in inflammatory DCs, which were originally characterized by secretion of high level of these mediators (Figures 1A,C). IL-10 production by DC inf decreased also by HIV-1 pretreatments, suggesting similar virus-mediated effects on both inflammatory and suppressive mediators. As opposed to the rewiring of DC cytokines, the activation markers CD80, CD86, CD83, and CD95 were not or only modestly affected by HIV-1 preincubation on the different DC types (Figure 1D). In fact, CD83 and CD86 were expressed at a slightly lower level on some of the IIIB pretreated DC inf following the LPS-induced activation, whereas SF162 had no effect on any of the tested markers.
The opposing effects of HIV-1 on the cytokine profile of the two functionally different DC lineages might indicate fundamental differences in the HIV-activated receptors and signaling processes. Nevertheless, HIV-1 induced a modest but consistent IL-6 and IL-10 production by both DC sup and DC inf, but no secretion of IL-12, suggesting at least partially overlapping immediate responses in the two DC types (Figure 2). Interestingly, although the HIV-1 strains SF162 and IIIB could both induce vigorous changes in DC cytokine production, by boosting IL-12 production in suppressed DCs and inhibiting IL-12 in immunostimulatory DCs, a third HIV-1 strain, BaL, had no effect on IL-12 levels under the same experimental conditions (Figure 3A). HIV-1 BaL has been frequently studied in DC infection experiments, and a higher infectivity has been demonstrated for BaL, compared to IIIB, in DC cultures (16). Therefore, we tested whether the susceptibility of DCs to infection by different HIV-1 strains could be linked to a differential ability of these strains to modulate cytokine production. HIV-1 infection, monitored by measuring the capsid protein p24, was detected in very few (typically <1%) of the SF162-treated DCs, using a virus concentration that efficiently modulated IL-12 production in previous experiments, and in a slightly larger but still minor population of BaL-treated cells (Figures 3B,C). Similar to SF162, the infection of both DC types remained <1% in cultures treated with IIIB (n = 3, data not shown). In the presence of allogeneic CD4 + T lymphocytes the infection of DCs increased strongly, reaching >10% of the DCs in case of BaL, demonstrating functionality of the studied virus preparations. These results indicated that the modulation of DC cytokines by the SF162 and IIIB strains did not require a productive infection in DCs, and the rewiring of cytokines can occur in a bystander manner at subinfectious viral levels. To further confirm this hypothesis, we treated DC sup and DC inf with HIV-1 IIIB in the presence or absence of 2.5 μg/ml AZT (zidovudine), an inhibitor of reverse transcriptase. Notably, HIV-1 similarly modulated the IL-12 production of DC sup and DC inf in the presence or absence of AZT, suggesting further a bystander effect on the cells (n = 3, data not shown).
Figure 1 legend: To analyze how DCs with unique functional profiles interact with HIV-1, we generated inflammatory DCs, with the capacity to produce high levels of IL-12, CCL2, CCL7 and TNF upon activation or, alternatively, suppressed DCs, characterized by IL-10 production, and we exposed these cells to the HIV-1 strain IIIB or SF162 or a virus-free control preparation for 24 h. Thereafter the DCs were activated by 250 ng/ml LPS in fresh medium for 24 h, and cytokine levels were analyzed in the supernatants. Cytokine concentrations (mean ± SD, calculated from triplicate wells) are shown in one representative experiment, using IIIB (A). Alternatively, cytokine levels detected in virus-treated cultures are expressed following normalization with levels observed in control samples. The symbols represent individual experiments performed with DC sup (B) or DC inf (C). We analyzed how preincubation of the different DC types with IIIB and SF162 HIV-1 strains influences the expression of CD80, CD86, CD83, and CD95 molecules in LPS-activated DCs, using flow cytometry (D). Representative results of two independent experiments are shown (*p < 0.05, **p < 0.01, and ***p < 0.001).
Structural Requirements for HIV-1-Mediated Modulation of the Different DC Types
To better understand how HIV-1 modulates the different DC types, we utilized a size-based fractionation of the HIV-1 preparations to separate smaller molecular components from virus particles or large macromolecular complexes and we tested the effects of these fractions individually in DC cultures. Treatment of the cells with the molecular weight fraction >300 kDa resulted in similar effects, i.e., stimulation of LPS-induced IL-12 production in DC sup and inhibition of DC inf , as the unfractionated HIV-1 supernatants, whereas molecular components in the range between 30 and 300 kD induced an inhibitory signal on DC inf , but possessed no stimulatory effects (Figure 4A). Molecules of even smaller size (<30 kDa) had no effect on IL-12 production. These results suggested that the DC modulatory signals are carried by larger components, which can include viral particles, microvesicles associated with viral replication or larger molecule complexes. In addition, soluble viral proteins also contributed to DC inhibitory signals.
In the next set of experiments, we analyzed the role of CD209, one of the major HIV-1 receptors in DCs, which has demonstrated roles in both DC suppression and increased IL-12 transcription upon HIV-1 binding (12,17). We downmodulated CD209 on the surface of immature DCs using siRNA technology ( Figure 4B) before exposing DC sup and DC inf to HIV-1. Interestingly, the lack of CD209 had no effect on the viral inhibition of DC inf or on stimulation of IL-12 production in DC sup (Figure 4C). In fact, levels of IL-12 were slightly elevated in the absence of CD209 in HIV-1 pretreated DC sup , suggesting a modest suppressive role for CD209 in IL-12 regulation. These results indicated that the virus-induced CD209 signaling might not be responsible for the differential regulation of cytokine production observed in the two DC types. The viral gp120 proteins play essential roles in binding a wide range of target cell receptors, and these molecules can efficiently modulate DC functions (18). We analyzed whether DC binding by gp120 could be sufficient to modulate IL-12 production in DC sup and DC inf by exposing these cells to recombinant gp120 and gp140 molecules representing different X4 and R5 HIV-1 strains before triggering IL-12 production by LPS (Figure 5A).
In the same assay, we included purified ManLAM component of Mycobacterium tuberculosis that is characterized by similar binding specificity to CD209 as the HIV-1 gp120 (17,19). The results indicated clear inhibitory signals elicited by gp120 proteins of the strains 96ZM651 and 93TH975 and by ManLAM on IL-12 production in both DC types, whereas the other tested gp120 and gp140 constructs showed no effect on IL-12 production ( Figure 5A). Differences in amino acid sequence, conformation, or glycosylation could potentially contribute to a more or less efficient DC modulation by the various recombinant proteins. Endotoxin contamination of the recombinant proteins, on the other hand, was ruled out with the help of a THP-1-XBlue bioassay system (detection limit 0.05 EU/ml). The HIV-1 protein Nef has been described to modulate DC functions acting in a bystander manner or, alternatively, within the infected cells (20). Similar to gp120, preincubation of DCs obtained from dense or sparse cultures with recombinant Nef resulted in a profound inhibition of the ability of the cells to produce IL-12 in response to a later LPS activation ( Figure 5B).
HIV-infected cells release Nef in microvesicles that co-exist with viral particles in the HIV-1 preparations and that can be internalized by other cells (21)(22)(23). In addition, myeloid cells could be particularly efficient in releasing Nef, even in the absence of profound viral replication (23), suggesting a potential role for endogenous Nef released by the low number of infected DCs in our culture system. Thus, the envelope protein gp120 and Nef might both contribute to IL-12 inhibition in DCs; however, these molecules may not be sufficient for delivering the signals that upregulate IL-12 production in HIV-1-treated DC sup . The endosomal TLRs, TLR3, TLR7, and TLR8 can act as sensors of viral RNA, and these receptors have all been implicated in HIV-1 recognition (24)(25)(26)(27)(28)(29). We decided to study the potential contribution of TLR-mediated pathways in the modulation of DC sup and DC inf by HIV-1, with the help of peptide inhibitors that transiently interfere with the dimerization of the MyD88 adapter proteins involved in TLR7 and TLR8 signaling or with the interaction of TLR3 with the TRIF adapter protein.
Interestingly, inhibitors of MyD88-and TRIF-signaling could selectively block the stimulatory effects of HIV-1 on IL-12 production of DC sup without interfering with the HIV-1-mediated suppression of DC inf (Figure 5C). These results suggest a concerted action of TLR3 with the MyD88-associated receptors TLR7 and/or TLR8 in the stimulation of DC sup by HIV-1. On the other hand, the same pathway could not stimulate DC inf , which might indicate differences in HIV-1 uptake and endosomal transportation in the two DC types.
Discussion
Modulation of cytokines and costimulatory molecules by HIV-1 has been extensively documented in DCs suggesting altered functions of these cells during HIV-1 infection, which might contribute to viral immune evasion and, somewhat controversially, to an increased immune activation (1,(3)(4)(5)(6)(8)(9)(10)(11)(12). The description of both inhibitory and stimulatory signals induced by HIV-1, together with the several alternative interpretations for these events, make it complicated to envisage a generalized model for the contribution of DCs in HIV-1 infection and in the following disease progression. In addition, DCs receive unique combination of exogenous differentiation signals and tissue-specific regulatory factors, which can influence their interaction with pathogens. We have previously shown that DC differentiation can be skewed in vitro by endogenously produced lactic acid, which accumulated in dense cultures and provided a strong and long-lasting anti-inflammatory stimulus to the cells (13). DCs developing in sparse cultures, on the contrary, avoided the lactate-mediated suppression and produced high levels of inflammatory cytokines, migrated toward lymphoid tissue-derived chemokines, and stimulated the differentiation of TH1 cells (13,14). This system allowed us to analyze in this study how DCs with unique functional programming respond to HIV-1 encounters. We showed that HIV-1 binding induced a robust reprogramming of the cytokine-producing abilities in DCs, and, surprisingly, the stimulatory or inhibitory nature of the HIV-1 effects was determined by endogenous characteristics of the tested DC types and not by viral compounds. DCs receiving a suppressive developmental program strongly upregulated their potential to produce IL-12, CCL2, and CCL7 in response to HIV-1 exposure, whereas DCs that were already characterized by the production of high levels of these mediators downregulated their cytokine production in response to HIV. Further studies are required to clarify the mechanisms behind the different types of DC responses in the presence of HIV-1; however, our experiments have already revealed several features of the strong DC modulatory effect of HIV-1. The ability of HIV-1 to rewire cytokine production in conditions, which do not allow productive DC infections suggest an effective bystander regulation of DCs and consequently impaired immune responses, even at very low level of virus replication. In our studies, CD209 played a negligible role in cytokine regulation, in spite of the previously demonstrated modulation of IL-12 and IL-10 through CD209-binding ligands (12,30), suggesting the presence of other important viral pathways acting on DC cytokines. In addition, stimulatory effects of HIV-1 strains IIIB and SF162 on the IL-12 production were not recapitulated by using recombinant gp120 molecules representing the envelope proteins of several HIV-1 strains, including SF162, indicating that the envelope-DC interaction, in itself, might only contribute to inhibitory signals in DCs. Similarly, preincubation of the cells with recombinant Nef resulted in a reduced IL-12 production, irrespective of the DC phenotype. IL-12 upregulation in DC sup appeared to be the consequence of the HIV-1-mediated activation of MyD88-and TRIF-mediated signals, which strongly suggests a role for endosomal TLRs; however, it remained to be understood why the same pathway could not contribute to higher cytokine levels in DC inf . 
It is tempting to hypothesize how the observed variability of DC-HIV interactions might influence immunity in HIV-1 infected individuals. DCs from dense cultures lacked immunostimulatory properties and in this respect resembled steady-state tissue resident DC types that promote tolerance instead of immune activation. Coincubation with HIV-1 appeared to provide weak TLR stimulation in these DCs boosting their ability to produce various inflammatory cytokines in response to a second activation signal received through TLR4. Such two-stage activation process might be essential for reprogramming cytokine expressions as the same cells secreted minute amounts of inflammatory cytokines, but very high level of IL-10, when TLR4 was activated without previous HIV-1 encounters. Thus, our results suggest that DCs developing in a tolerogenic environment might be efficient in inducing antiviral responses; however, the cells require sequential, gradually increasing activation signals, e.g., through viral compounds, inflammatory cytokines, or T cell interactions, to successfully shift toward an immunostimulatory phenotype. This hypothesis is in line with previous models suggesting powerful synergism between certain TLR ligands (31) or a need for sequential stimulation via different activation pathways to avoid functional exhaustion (32). Indeed we could demonstrate that, on the contrary to the suppressed DC type, the more inflammatory DCs rapidly developed functional exhaustion in the presence of persisting viral signals.
In contrast to acute immune responses, DC activation might become impaired as the HIV-1 infection becomes chronic. Our data suggest that chronic exposure to HIV-1 particles might lead to a persistently perturbed activation threshold in tissue-resident DCs, potentially promoting increased responses to weak and irrelevant activation signals. Successful antiretroviral therapy may correct the HIV-1-mediated rewiring of DCs; however, other bystander events associated with chronic HIV-1 infection, e.g., microbial translocation or the dysregulation of cytokine levels, might persist despite the suppression of virus replication and could also contribute to hypersensitivity of tissue-resident DCs.
In addition to the variable effects of HIV-1 observed in the different DC types, we have also observed that the HIV-1 strain BaL, unlike IIIB or SF162, could not influence cytokine responses in DCs, suggesting a heterogeneity between virus strains in their capacity to reprogram DC activities. It remains to be clarified, potentially by testing a larger repertoire of HIV-1 strains or virus combinations isolated from different individuals, whether a variability in DC modulation by the prevailing virus strains could exist between different patients or whether the capacity to control DC cytokines would represent a stable selection criterion universally maintained during the emergence of new virus variants.
In summary, we have described a robust regulation of DC cytokines at subinfectious HIV-1 levels, and our results highlighted the importance of considering DC heterogeneity for better understanding the interaction of DCs and HIV-1. Variability has also been detected between different HIV-1 strains in their ability to modulate DC cytokines, which suggests potentially existing differences in DC-HIV-1 interactions not only between DC types but also between individual patients and different stages of disease progression.
Author Contributions
AN performed experiments, analyzed the data, and wrote the manuscript. SA and NN performed experiments and critically reviewed the manuscript. MJ and FC provided important research tools and resources and critically reviewed the manuscript. MG performed experiments and analyzed the data. BR designed the project, performed experiments, analyzed the data, and wrote the manuscript.
Acknowledgments
We thank Anna-Lena Spetz and Arpad Lanyi for helpful discussions.
|
2017-05-03T19:05:15.942Z
|
2017-03-13T00:00:00.000
|
{
"year": 2017,
"sha1": "186f29287951868f8aa6a1ff9353918801efc2a3",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2017.00244/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "186f29287951868f8aa6a1ff9353918801efc2a3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
210942706
|
pes2o/s2orc
|
v3-fos-license
|
Slow Fluors for Highly Effective Separation of Cherenkov Light in Liquid Scintillators
The timing and spectral characteristics of four highly efficient, slow fluors are presented for liquid scintillator solutions using linear alkylbenzene (LAB) as the primary solvent. The mixtures exhibit high light yields, but with rise times of several ns or more and decay times on the order of tens of ns. Consequently, such liquid scintillator mixtures can be used for highly effective separation of Cherenkov and scintillation components based on timing in large scale liquid scintillation detectors. Such a separation, showing high light yield and directional information, is demonstrated here on a bench-top scale for electrons with energies extending below 1 MeV. This could have significant consequences for the future development of such detectors for measurements of solar neutrinos and neutrinoless double beta decay as well as providing good directional information for elastic scattering events from supernovae neutrinos and reactor anti-neutrinos, amongst others.
Introduction
The potential advantages of detecting the distinct Cherenkov and scintillation components of light signals in large liquid scintillation detectors via time separation has been highlighted by a number of authors (see for example [1], [2], [3], [4]). Combining advantages of both detection techniques, the goal would be to use the Cherenkov signal to provide directional and topological information while maintaining the good energy resolution of bright liquid scintillators. Furthermore, the ratio of Cherenkov to scintillation signals could be used to provide information for particle identification and background discrimination. Typical approaches have so far either relied on timing improvements to photodetectors [2], [5] and/or weak scintillators [6], often using a low concentration of primary fluor to decrease non-radiative transfer and reduce contamination of the Cherenkov signal [4], [3], [7]. The difficulty with the former approach is that wavelength dispersion in large detectors will broaden the prompt Cherenkov signal over several ns, limiting the extent of signal separation in typical scintillation mixtures even for ideal photodetectors. Simulation studies have so far only indicated modest signal separation for large instruments [2]. The difficulty with the latter approach is that it sacrifices light yield and, hence, energy resolution. A different method recently proposed [8] uses dichroic surfaces to spectrally separate the portion of Cherenkov light above 450nm from the scintillation signal. However, this approach would require potentially costly hardware upgrades and has a Cherenkov photon collection efficiency that is limited by the spectral range and photocathode coverage dedicated to these photons.
The work presented in this paper instead follows the suggestion of [1] to develop slow liquid scintillators with high light yields. This would offer a cost-effective approach that could be applied to existing detectors to provide excellent Cherenkov separation over the whole range of photocathode coverage. Directional information could then be used effectively down to much lower energies, which is relevant for topics such as low energy solar neutrinos as well as neutrinoless double-beta decay. While the slower scintillation signal will degrade vertex resolution to some extent, this can be tuned relative to the Cherenkov separation purity by choosing different fluor mixtures. For energies above a couple MeV, the presence of the prompt Cherenkov component in any case constrains this resolution in large-scale detectors to be somewhere between that for standard liquid scintillator (∼10cm) and pure Cherenkov (∼30cm) instruments.
Linear Alkylbenzene (LAB) will be used as the primary solvent for this study as it has been shown to be easy to handle with high intrinsic light yield, excellent optical properties and is either in use or planned to be used by a number of liquid scintillation experiments [9], [10], [11], [12], [13]. Four fluors have been selected in this study, two of which (acenaphthene and pyrene) are suitable as primary fluors in LAB, and two (9,10-diphenylanthracene (DPA) and 1,6-diphenyl-1,3,5-hexatriene (DPH)) as secondary fluors. Like all polyaromatic hydrocarbons and polyenes, these fluors tend to be light-sensitive to varying degrees, so exposure to UV should be minimised to avoid degradation of scintillator optical quality. While absorption and emission characteristics of these fluors have been measured before [14], there can be significant solvent effects on emission spectra. It is also important to measure absorption spectra over a large dynamic range relevant for large scale detectors. Measurements of the relevant properties of these fluors in LAB mixtures will therefore be described in the following sections, along with light yield, timing characteristics and demonstrations of directional Cherenkov light separation for electron energies in the region below 1 MeV.
Light Yield
The relative light yield of scintillation mixtures was determined using samples in borosilicate scintillation vials irradiated by a 90 Sr source and viewed by a Hamamatsu H11432-100 (SBA) photomultiplier tube positioned ∼1cm away from the vial. Duplicate samples were used to assess systematics due to varying vial glass thickness and sample preparation. Spectral endpoints were then compared with those from reference samples of LAB (PETRELAB 500-Q from Petresa Canada Inc.) with a 2g/l concentration of 2,5-Diphenyloxazole (PPO), for which an intrinsic light yield of 11900 photons per MeV has previously been established [15], taking into account the LAB absorption and photocathode efficiency as a function of wavelength for different emission spectra.
Transmission
Light transmission measurements were made in cyclohexane using quartz cuvettes with a 1cm path length in a Perkins-Elmer Lambda 9000 transmission spectrometer. The absorbance, A, is defined as log 10 (I 0 /I), where I 0 /I is the ratio of the transmitted light intensity at a given wavelength without the sample relative to that passing through the sample. The molar extinction coefficient was then calculated from the Beer-Lambert Law as ε = A/cl, where c is the molar concentration of the fluor and l is the path-length used. Units for ε are expressed in liters/mole/cm. For the fluors considered here, a value of ε = 0.1 roughly corresponds to an absorption length of 10m at a fluor concentration of ∼1 g/l. As this length scale is of relevance for large detectors, efforts have been made to extend the measured range down to this value of ε for primary fluors. For secondary fluors, relevant concentrations for detectors are typically ∼100 times smaller, so measurements down to values of ε=10 are sufficient.
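To make the quoted length scale concrete, the short sketch below converts an absorbance into a molar extinction coefficient and then into a bulk 1/e absorption length. The molar mass of ~200 g/mol and the ln(10) convention for the 1/e length are assumptions made for illustration, not values from the measurement.

```python
import numpy as np

def molar_extinction(absorbance, molar_concentration, path_length_cm):
    """Beer-Lambert law: epsilon = A / (c * l), in liters/mole/cm."""
    return absorbance / (molar_concentration * path_length_cm)

def absorption_length_m(epsilon, mass_concentration_g_per_l, molar_mass_g_per_mol):
    """Approximate 1/e absorption length (m) at a given fluor loading.

    A is a base-10 absorbance, so the 1/e length carries a ln(10) factor.
    """
    c_molar = mass_concentration_g_per_l / molar_mass_g_per_mol   # mol/l
    mu = np.log(10.0) * epsilon * c_molar                          # attenuation coefficient, 1/cm
    return 0.01 / mu                                               # convert cm to m

print(molar_extinction(absorbance=0.05, molar_concentration=5e-3, path_length_cm=1.0))  # -> 10 l/mol/cm
# Rough check of the scale quoted above: epsilon ~ 0.1 l/mol/cm at ~1 g/l, assuming a molar
# mass of ~200 g/mol (representative of the fluors studied), gives an absorption length ~10 m.
print(absorption_length_m(epsilon=0.1, mass_concentration_g_per_l=1.0, molar_mass_g_per_mol=200.0))
```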
Fluorescence Spectra
Fluorescence emission spectra were measured with an Andor 303i spectrometer with a 1024x256 back-illuminated CCD and a 300 lines/mm grating, with samples excited by a 266nm UV laser.
Emission Time Spectra
The emission time profiles were determined using an arrangement based on the single photon technique described in [16] but augmented to provide a trigger that is independent of the fluor under observation. An overview diagram of this arrangement is given in Figure 1. The event trigger is provided by a 2 mm diameter Saint-Gobain BCF-12 scintillating fibre optically coupled to a Hamamatsu R9880U PMT, henceforth referred to as the trigger PMT, using index matching gel. The fibre is fed through the base-plate holding the fluor sample from below and wrapped in highly reflective aluminized foil to maximise light collection. A 90 Sr source, also 2 mm in diameter, is used to excite both the scintillating fibre and the fluor sample inside a Borosilicate glass scintillation vial, which is observed by two PMTs: a Hamamatsu R6594 PMT positioned ∼1 cm from the vial and a Hamamatsu R9880U placed ∼10-20 cm away.
The first (charge collection) PMT provides a measure of the energy deposited in the scintillator and is used for applying event level selection cuts. The second (measurement) PMT is placed at a distance such that it has less than a 10% occupancy relative to the charge collection tube. As a result, the number of events in which the measurement tube observes multiple photons is considered negligible and ignored in the analysis. Tests done at other occupancies verify the validity of this approximation. The PMTs, the base-plate that holds the fluor sample and the 90 Sr source were mounted on a Thorlabs optical posts for positional stability. All PMTs were amplified through an ORTEC FTA420 amplifier and read out at 2.5 GS/s by a LeCroy Wavepro 7200a oscilloscope. Each digitized waveform is 1 µs long with 400 ps samples. A coincidence trigger was used to acquire data, enforcing coincident observations between the trigger and measurement PMTs within a 800 ns window.
To minimize the contribution of reflected or scattered photon paths, a masking box was placed over the measurement tube. The box has a 25 mm diameter port on the front face for mounting optical components. When observing pyrene samples, a Thorlabs FEL0450 long-pass or an Edmund Optics 400 nm short-pass optical filter was mounted to select the emission components of the excimer and monomer states, respectively. In all other cases, the port housed a Thorlabs ID25 25 mm diameter iris, which was used to fine-tune the occupancy at the measurement tube where required.
In the results that follow, the orientation of the 90 Sr source, sample and measurement PMT were configured in two arrangements: In the towards configuration (Figure 1a), the source was placed on the far side of the sample so that the Cherenkov light emitted by electrons entering the fluor sample would, on average, be directed towards the measurement tube. The away configuration oriented the source on the near-side of the sample, such that the Cherenkov light emitted by the electrons was, on average, directed away from the PMT. The whole arrangement was contained within a 120 x 75 x 65 cm sealed dark box.
To mitigate fluorescence quenching by oxygen, all samples were bubbled with nitrogen for ten minutes before data taking. In order to minimize any ingress of atmospheric oxygen, the vial lid-thread was wrapped in PTFE tape before sealing.
The 90 Sr source used in these studies supports 90 Y, which undergoes β-decay with an endpoint of 2.28 MeV. After the emitted electron passes through the 2mm-thick optical fibre and 1mm-thick vial wall, this leaves a maximum energy of ∼1.2 MeV that can be deposited in the scintillator sample, though typical energies after event trigger selection are below 1 MeV. For these energies, Cherenkov light produced in the non-scintillating vial wall contributes ∼20-30% of the overall Cherenkov signal observed. This was experimentally verified by repeating the acenaphthene measurements of Figure 6c using an extra glass insert to double the effective vial thickness, comparing cases with and without an opaque layer between the insert and vial to insure the same scintillator pulse height selection. The observed fractional glass contribution to the Cherenkov light is comparable to the proportion of energy deposition that falls below the Cherenkov threshold for electrons of these energies. Therefore, the Cherenkov to scintillation ratio indicated by the measurements shown here, which include the glass contribution, is indicative of what would be seen at higher energies as one moves away from the Cherenkov threshold.
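As a rough cross-check of the threshold argument above, the snippet below evaluates the electron Cherenkov threshold for an assumed refractive index of about 1.5, typical of LAB-like liquids in the visible; the index value is an assumption for illustration, not a measurement from this work.

```python
import numpy as np

def cherenkov_threshold_kinetic_energy_mev(refractive_index, rest_mass_mev=0.511):
    """Kinetic energy at which a charged particle reaches beta = 1/n."""
    beta_th = 1.0 / refractive_index
    gamma_th = 1.0 / np.sqrt(1.0 - beta_th ** 2)
    return rest_mass_mev * (gamma_th - 1.0)

# Assuming n ~ 1.5, electrons radiate Cherenkov light only above roughly 0.17-0.18 MeV.
print(cherenkov_threshold_kinetic_energy_mev(1.5))
```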
Time profile analysis
Offline analysis software was used to calculate the time differences between the digitized trigger and measurement signals. All digitized traces were filtered with a 5th order Butterworth infinite impulse response filter with cut-off frequency 500 MHz, 300 MHz and 50 MHz for the measurement, trigger and charge collection signals, respectively. To measure the separation time between trigger and measurement signals, a constant fraction discriminator was implemented in the code to calculate timestamps by linearly interpolating between the sample points that bound a given threshold. A constant fraction of 40% was used to calculate timestamps at the measurement PMT. For the trigger PMT, timestamps were taken at a 5% constant fraction threshold as it was observed to provide the optimal timing resolution when measuring the impulse response function (IRF) of the system. Charge cuts were applied at both the charge collection and trigger PMTs to remove low energy 'tail' events. In all cases, the trigger tube cut was set at 200 pC. The cuts applied at the charge collection tube were tuned independently for each fluor. Any events where multiple transient signals were observed in either the signal or trigger traces were rejected from the analysis.
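A minimal sketch of this timestamp extraction is given below, assuming a 2.5 GS/s trace and a toy pulse shape. The zero-phase filtering and the definition of the constant fraction relative to the pulse peak are simplifying assumptions for illustration rather than the exact offline code used in the analysis.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def cfd_timestamp(trace, dt_ns, fraction):
    """Constant-fraction timestamp: first crossing of fraction*peak on the rising edge,
    located by linear interpolation between the two bounding samples."""
    threshold = fraction * trace.max()
    above = np.nonzero(trace >= threshold)[0]
    i = above[0]                      # first sample at/above threshold
    if i == 0:
        return 0.0
    y0, y1 = trace[i - 1], trace[i]   # samples bounding the threshold crossing
    return dt_ns * ((i - 1) + (threshold - y0) / (y1 - y0))

# Hypothetical digitized pulse: 2.5 GS/s sampling (0.4 ns per sample), low-pass filtered
# with a 5th-order Butterworth before timing, loosely following the text above.
dt = 0.4
t = np.arange(0.0, 200.0, dt)
pulse = np.exp(-(t - 50.0) / 20.0) * (t > 50.0)            # toy pulse starting near 50 ns
sos = butter(5, 0.5e9, btype="low", fs=2.5e9, output="sos")
filtered = sosfiltfilt(sos, pulse)
print(cfd_timestamp(filtered, dt_ns=dt, fraction=0.40))    # ~50 ns for this toy pulse
```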
Measuring the system's impulse response function
The system IRF was measured by replacing the fluor sample shown in Figure 1b with a vial of distilled water. In this arrangement, electrons from the 90 Sr source propagate through the trigger fibre and into the water sample, emitting a prompt Cherenkov signal during transit. This prompt signal is modelled as a δ-function response. The IRF of the system is measured by building a histogram of the time difference between the trigger and measurement tubes over a number of events. As described above, a charge cut of 200 pC was used to reject events in the tail of the trigger PMT's charge distribution. By fitting a Gaussian to the distribution given in Figure 2a, the coincident timing resolution (σ) of the system was measured to be 390ps.
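The IRF width extraction reduces to a Gaussian fit of the coincidence-time histogram. The snippet below illustrates this step on synthetic time differences drawn with a 390 ps spread; the offset and the sample size are made up for demonstration only.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical per-event time differences (ns) between trigger and measurement PMTs
# recorded with a water sample, i.e. a prompt Cherenkov-only signal.
rng = np.random.default_rng(1)
dt_ns = rng.normal(loc=12.0, scale=0.39, size=20000)

# Gaussian fit to the coincidence-time distribution; sigma estimates the system IRF width.
mu_hat, sigma_hat = norm.fit(dt_ns)
print(f"coincident timing resolution sigma = {sigma_hat * 1000:.0f} ps")
```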
Fitting procedure
The optical response of the slow scintillating fluors is fit using an empirical model consisting of the sum of n exponential decays (n = 1 or 2) with a single, common rise time. In all cases, the fit quality was significantly improved by including a small additional component with an instantaneous rise time and a fall time (τ ) approximately equivalent to the rise time of the primary fluor. This correlation is shown in Figure 3. We believe this can be attributed to the high wavelength tail of the principal LAB solvent emission (Figure 4), which can escape absorption but is depleted by the non-radiative coupling of LAB to the primary fluor. Finally, a delta function was used to represent the Cherenkov signal. The full optical response model is given in Equation 1. This optical model was then analytically convolved with the measured system IRF and scaled to match the number of events in a relevant run. Free model parameters were then determined by minimizing the negative log likelihood. The resulting fit model is given in Equation 2 and the associated parameters are described in Table 1. For each fluor, fits were performed simultaneously over both the towards and away spectra. The parameters t 0 , N events , F Cheren were allowed to float independently for each of the two spectra. All other parameters were assumed to be common to both.
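The sketch below builds a numerical version of this pulse-shape model (one or more decay components sharing a common rise time, a small prompt solvent-like term, a delta-function Cherenkov term, and Gaussian IRF smearing) together with an unbinned negative log likelihood. The numerical convolution, the normalization choices, and the example parameter values are assumptions made for illustration and are not the analytic model of Equations 1 and 2.

```python
import numpy as np

def pulse_shape_pdf(t, rise, decays, weights, prompt_frac, cherenkov_frac, sigma_irf):
    """Numerical sketch of the empirical model described above. All times in ns."""
    dt = t[1] - t[0]
    scint = np.zeros_like(t)
    for tau_d, w in zip(decays, weights):
        scint += w * (np.exp(-t / tau_d) - np.exp(-t / rise)) / (tau_d - rise)
    scint += prompt_frac * np.exp(-t / rise) / rise      # fast residual solvent-like term
    scint[t < 0] = 0.0
    scint /= scint.sum() * dt                            # normalize the scintillation part
    model = (1.0 - cherenkov_frac) * scint
    model[np.argmin(np.abs(t))] += cherenkov_frac / dt   # delta-function Cherenkov term at t = 0
    n_half = int(round(5.0 * sigma_irf / dt))            # symmetric Gaussian IRF kernel
    kernel = np.exp(-0.5 * (np.arange(-n_half, n_half + 1) * dt / sigma_irf) ** 2)
    smeared = np.convolve(model, kernel / kernel.sum(), mode="same")
    return smeared / (smeared.sum() * dt)

def negative_log_likelihood(hit_times, t_grid, pdf_vals):
    p = np.interp(hit_times, t_grid, pdf_vals)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

# Illustrative parameters of the order reported below for acenaphthene:
# ~2 ns rise, ~45 ns decay, a few percent Cherenkov fraction, 0.39 ns IRF width.
t_grid = np.arange(-20.0, 400.0, 0.1)
pdf = pulse_shape_pdf(t_grid, rise=2.1, decays=[45.0], weights=[1.0],
                      prompt_frac=0.05, cherenkov_frac=0.05, sigma_irf=0.39)
print(negative_log_likelihood(np.array([1.0, 30.0, 80.0]), t_grid, pdf))
```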
Acenaphthene
Acenaphthene (CAS 83-32-9) is a colourless, needle-like crystalline solid with a melting point of 93 o C and a chemical formula of C 12 H 10 (MW 154.212 g/mole) that comprises two fused benzene rings with an additional ethylene bridge. The acenaphthene sample used here was obtained from Tokyo Chemical Company (TCI) with >99% purity. Figure 5a shows the absorption and relative emission spectra in LAB, with figure 5b showing more details of the absorption on a logarithmic scale. When used as a primary fluor in LAB, the light yield was found to reach a maximum for a concentration near 4 g/l. This gave a light yield of 67 ± 2% that of the PPO reference, which is just under what would be expected from the respective quantum yields typically quoted (0.6 for acenaphthene [14] and ∼0.8-0.84 for PPO [21]) assuming similar coupling efficiencies. Owing to the light emission peaking near ∼335nm, acenaphthene should be used in conjunction with a secondary fluor, such as bis-MSB, to shift the emission wavelength beyond the LAB absorption region for large detectors.
The timing spectra for the forward and backward experimental configurations are given in Figure 6a. A charge selection cut was placed at 15 pC based on the measured charge distributions given in Figure 6b. The results of fits to the measured timing spectra are given in Tables 3 and 4, showing a rise time of 2.14 ± 0.19 ns and a decay time of 45.4 ± 0.3 ns. Measurements of the primary decay time component were found to be comparable with measurements of acenaphthene in cyclohexane by [14]. This can be compared, for example, to the approach of [4] using dilute PPO concentrations, which achieved a scintillator formulation with a rise time of 1.16ns, a decay time of 26.7ns and a relative light yield corresponding to ∼35% of the standard mix. By contrast, the acenaphthene scintillator has nearly double the light yield with much better time separation. Note that the Cherenkov separation shown in Figures 6c and 6d is nearly perfect, with a very clear directional peak with little contamination by scintillation light at early times.
Pyrene
Pyrene (CAS 129-00-0) is a pale yellow, crystalline solid with a melting point of 150 o C and a chemical formula of C 16 H 10 (MW 202.25 g/mole) that comprises 4 fused benzene rings. The pyrene sample used here was obtained from Sigma Aldrich (Merck) with >99 % purity. Figure 7a shows the absorption and relative emission spectra in LAB, with figure 7b showing more details of the absorption on a logarithmic scale. Pyrene exhibits higher wavelength excimer emission, peaking around 480nm, which becomes more prominent at higher concentrations and is sensitive to the solvent used. The emission shape shown in figure 7a is for a concentration of 1 g/l in PPO. The leftmost monomer peak, centered around ∼390nm, becomes negligible for concentrations of several g/l or more.
When used as a primary fluor in LAB, the light yield was found to reach a maximum for concentrations beyond 2 g/l, maintaining an approximately constant level for concentrations up to at least 10g/l. This is a consequence of the high-yield excimer emission at higher wavelengths that largely avoids self-absorption. The peak light yield at a concentration of 1 g/l corresponds to 76 ± 2 % that of the PPO reference. This is consistent with the pyrene quantum yield of ∼0.65 [17] compared to ∼0.8 for PPO, particularly considering that the coupling efficiency might be slightly lower compared with the higher concentration PPO reference. For pyrene concentrations in excess of several g/l, this light yield rises to 99 ± 6 %, which would be consistent with a higher coupling efficiency. It should be noted that, as a consequence of the excimer emission occurring at higher wavelengths that are away from the peak of bialkali photocathode efficiencies, the observed light levels tend to be ∼30% lower for typical large-format PMTs. On the other hand, absorption in LAB is reduced at these wavelengths, which may compensate for light levels to some extent in large scale detectors.
The timing spectra for the forward and backward experimental configurations of the monomer state, selected with a 400 nm short pass optical filter, at a concentration of 1 g/l are shown in Figure 8. Spectra for the excimer state, selected with a 450 nm long pass optical filter, at concentrations of 1 g/l and 8 g/l are shown in Figures 9 and 10, respectively. A charge selection cut was placed at 15 pC based on the associated charge distributions. The results of fits to the measured timing spectra are given in Tables 3 and 4, showing rise times ranging from ∼4.5-60 ns and decay times ranging from ∼50-100 ns, depending on the concentrations used. The Cherenkov separation is even more distinct than for acenaphthene, with a very clear directional peak with little contamination by scintillation light at early times.
For the concentrations used here, the A component fit to very small values, often consistent with zero. We believe this is consistent with the higher molar extinction at longer wavelength for pyrene compared with acenaphthene or PPO, which then absorbs more of the higher wavelength residual LAB emission. Measurements of the primary fall time components of both the monomer and excimer states, shown in Figure 11, were found to be comparable with measurements of pyrene in cyclohexane as measured by [18], though there are clear differences in the measured rise times, which may be indicative of solvent effects.
9,10-Diphenylanthracene (DPA)
DPA (CAS 1499-10-1) is a yellow, crystalline solid with a melting point of 250 o C and a chemical formula of C 26 H 18 (MW 330.42 g/mole) that comprises three fused benzene rings with two additional linked rings from the centre of the chain. The DPA sample used here was obtained from Tokyo Chemical Company (TCI) with >98 % purity. Figure 12a shows the absorption and relative emission spectra in LAB, with figure 12b showing more details of the absorption on a logarithmic scale. When used as a secondary fluor in conjunction with 2 g/l PPO in LAB, the light yield of the mixture was found to reach 93±1 % that of the PPO reference alone, which is consistent with the high quantum yield of DPA [19]. For measurements here, a concentration of 0.3 g/l was used so as to ensure nearly complete absorption of the PPO emission spectrum within the vial while still maintaining a dominantly radiative transfer to DPA.
It is interesting to note two distinct components of absorption and emission, with the lower wavelength absorption in the 250 nm region roughly 50 times stronger than that around 375 nm and corresponding emission near 340 nm overlapping the secondary absorption region. These features were missing from the measurements by Berlman [14], likely because measurements were not extended low enough in wavelength. It can be dissolved in LAB at concentrations as high as 5 g/l at room temperature and, in principle, could then also be used as a primary fluor, although it is several times more expensive than PPO and would suffer from notable absorption below ∼450 nm in large scale detectors, with a large proportion of light then shifted to less efficient detection ranges for bialkali PMTs.
The timing spectra for the forward and backward experimental configurations using DPA as a primary and secondary fluor are given in Figures 13 and 14, respectively. A charge selection cut was placed at 50 pC based on the measured charge distributions given in Figures 13b and 14b. The results of fits to the measured timing spectra are given in Tables 3 and 4, showing a rise time of ∼3.2 ns and a primary decay time of ∼12 ns. The extent of Cherenkov separation is, in fact, similar for both DPA concentrations used: the main difference between Figures 13c and 14c is that the scintillation signal in the former is ∼20 % higher owing to the better transfer efficiency of the higher concentration fluor and its greater quantum efficiency compared to the PPO-DPA combination. The Cherenkov separation is still good, but the contamination from scintillation light is greater than for acenaphthene or pyrene owing to the faster decay time. This contamination will increase in large detectors owing to dispersion effects. However, this also allows for improved vertex reconstruction and less susceptibility to fluorescence quenching in loaded scintillator mixtures (which tends to increase with fluor lifetime). Any fluorescence quenching that is present will tend to improve the visibility of the Cherenkov signal again, so this may be the better choice of fluor for certain physics applications.
1,6-Diphenyl-1,3,5-hexatriene (DPH)
DPH (CAS 17329-15-6) is a yellow, crystalline solid with a melting point of 200 o C and a chemical formula of C 18 H 16 (MW 232.326 g/mole) that comprises two benzene rings connected by a hexatriene chain. The DPH sample used here was obtained from Tokyo Chemical Company (TCI) with >95 % purity. Figure 15a shows the absorption and relative emission spectra in LAB, with figure 15b showing more details of the absorption on a logarithmic scale.
When used as a secondary fluor in conjunction with 2 g/l PPO in LAB, the light yield of the mixture was found to reach 99 ± 6 % that of the PPO reference alone, which is a little high but consistent within the errors of that expected for the quantum yield range typically quoted for DPH [20] (it should also be noted that literature values for quantum yields tend to vary by roughly ∼10 % for the fluors considered here). A concentration of 0.1 g/l was used so as to ensure nearly complete absorption of the PPO emission spectrum within the vial while still maintaining radiative transfer to DPH. As a significant portion of the emission spectrum lies above 450 nm, where bialkali photocathodes begin to become less efficient, observed light levels in typical large format PMTs will tend to be ∼25 % lower relative to PPO. DPH is roughly 5 times more expensive than DPA, but much less is needed as a secondary fluor owing to the significantly higher molar absorption coefficient.
The timing spectra for the forward and backward experimental configurations using DPH as a secondary fluor are given in Figure 16. A charge selection cut was placed at 50 pC based on the measured charge distributions. The results of fits to the measured timing spectra are given in Tables 3 and 4
Conclusions
The properties of 4 slow fluors have been studied in the context of LAB-based liquid scintillator mixtures to provide a means to effectively separate Cherenkov light in time from the scintillation signal with high efficiency. This allows for directional and particle ID information while also maintaining good energy resolution. Such an approach is highly economical and can be readily applied to existing and planned large-scale liquid scintillator instruments. Using this technique, this paper explicitly demonstrates Cherenkov separation on a benchtop scale, showing clear directionality, for electron energies extending below 1 MeV. This has important consequences for a variety of future instruments, including measurements of low energy solar neutrinos and searches for neutrinoless double beta decay in loaded scintillator detectors. This also opens the possibility of obtaining good directional information for elastic scattering events from supernovae neutrinos and reactor anti-neutrinos in large scale liquid scintillation detectors. While the use of slow fluors means that the vertex resolution may be worse than typical large scale liquid scintillator detectors (but better than typical large scale Cherenkov detectors), the balance between position resolution, Cherenkov separation purity and energy resolution can be tuned for a particular physics objective by modifying the fluor mixture. This balance is also affected by the presence of fluorescence quenchers, which may be naturally present in the case of loaded scintillator mixtures or could be purposely introduced.
|
2020-01-30T02:01:18.576Z
|
2020-01-24T00:00:00.000
|
{
"year": 2020,
"sha1": "1d4c386a194444fc29b27b14f430ace3b12c5ecf",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2001.10825",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a46bd49356dde9ebd7ee37b9573f0a8c68e96a04",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
231925322
|
pes2o/s2orc
|
v3-fos-license
|
Self-organization and shape change by active polarization in nematic droplets
Active forces occurring within cells can drive crucial biological processes that involve spontaneous organization and shape change, such as cell division. Motivated by recent in vitro experiments of nematic droplets of cytoskeletal filaments and motors that self-organize and divide, we present a minimal hydrodynamic model that combines the nonequilibrium kinetics of motor-filament interactions with equilibrium nematic phase separation. The motors organize within droplets and structure filaments into polarized aster defects. At large motor activity, they can even deform or divide the droplet, or form multi-aster chains of droplets. Our predicted phase diagram recapitulates these experimentally observed shapes.
Introduction. Active mechanical forces enable living systems, particularly animal cells, to move, change shape, organize components, and divide. Subcellular cytoskeletal assemblies, comprising polar filaments and molecular motors that transduce biochemical reactions to generate active mechanical forces, drive these processes [1]. Understanding the general physical principles of living matter provides insight into cell biology, as well as guides the engineering of artificial cells that exhibit spatiotemporal organization of components and spontaneous shape change characteristic of cell division [2]. Model in vitro systems of purified cytoskeletal proteins, which capture elements of cell biological phenomena with only a fraction of the biochemical complexity occurring in vivo, exhibit a rich array of collective phenomena [3,4] that motivates bio-inspired active matter theory [5].
Recently, phase segregated macromolecular droplets have emerged as model systems to investigate spatiotemporal organization in biological cells and phase separation has been proposed as a possible primitive means of subcellular organization in protocells [6]. These macromolecular liquids are typically composed of disordered proteins and nucleic acids, and consequently, internal droplet order as well as motor activity are absent. In contrast, recent experiments indicate that biopolymers with high aspect ratio, such as cytoskeletal filaments, can form phase separated droplets with orientational order, as in liquid crystals, because of the alignment of filaments in the dense phase [7]. This nematic order confers an equilibrium spindle shape to these droplets [8,9], known as tactoids, which arises from a competition of droplet surface tension, the tendency of the filaments to align with the interface, and elasticity arising from bulk nematic order [10,11].
The ordered structure in biomolecular fluids influences the emergence of collective phenomena in active systems [5]. When confined to a droplet, active forces lead to non-equilibrium phenomena such as droplet shape change [12][13][14][15][16][17][18][19], motility [20,21] and dynamics governed by the geometry of the confining droplet [22]. In addition to these active nematic fluids, the directed "walking" of motors on filaments, along with motor-based filament crosslinking, can lead to polar order, where filaments prefer to point in the same direction at the mesoscopic scale, and form self-organized defect structures, such as asters and vortices [23]. Such polar active states are fruitfully described by hydrodynamic theories [24][25][26][27][28][29][30]. In recent experiments, myosin motors were shown to self-organize at the midplane of the aforementioned actin-filament based nematic droplets [31]. When sufficiently active myosin motors are present, they deform the droplet, even splitting it into two. A simple free energy-based model of a nematic droplet, considering only the mutual alignment of the filaments and motors and adhesion of the droplet to the motor complex, was invoked to capture these key behaviors, but relied on arguments specific to the shape and structure of both the motor complexes and the droplet [8]. On the other hand, active mechanical forces, specifically the directed sliding of filaments by motors leading to their sorting by polarity [1], are expected to play a role in the dynamics of the organization. The effects of such active emergence of local polar order and associated defects, within a nematic droplet with a preferred orientational axis, have not yet been theoretically explored.
In this Letter, we combine continuum modeling that describes the structure of equilibrium nematic droplets with an active mechanical model for how motors move and slide filaments according to polarity, while self-organizing into localized asters. Using numerical simulations, complemented by theoretical analysis, we show that the resulting motor-filament self-organization destabilizes droplets, giving rise to a rich array of experimentally observed structures including deformed, divided and multi-lobed droplets that can be generated by tuning one of two motor activity-dependent parameters.
1D polarity sorting model. To build up intuition on the role of active forces in motor self-organization, we first examine a simplified one dimensional setup of motors interacting with filaments, as sketched in Fig. 1(i). Here, red lines depict actin filaments, blue circles show myosin II motors, while the black arrows indicate the motors' respective directions of motion. This scenario arises, for instance, in contractile actin bundles [32,33] and when actin filaments are locally oriented along a nematic director [34]. Since motors walk towards a specific end of the polar filament (the barbed or plus end), we consider a density of left and right pointing filaments, denoted by $n_+$ and $n_-$ respectively. Momentum balance for the motor-filament system requires that a right (left) pointing filament is pushed to the right (left) by the motor as the motor moves in a single direction along the polar filament. The active motion of motors of density $m$ thus gives rise to mass fluxes of both motors and filaments. In the dilute regime, the fluxes of right and left pointing filaments can be expressed as $J_+ = \zeta n_+ m$ and $J_- = -\zeta n_- m$ respectively, where $\zeta$ is a parameter related to the active motor force. The net flux of motors is written as $-v_0 (n_+ - n_-) m$, where $v_0$ is the self-propulsion velocity of motors.
Including the diffusion of filaments and motors with coefficients $D$ and $D_m$ respectively, the flux conservation equations can be written in terms of the filament density $\rho = n_+ + n_-$ and the 1D polarization $p = (n_+ - n_-)/\rho_0$ of filaments, where $\rho_0 = \langle\rho\rangle$ is the average density, giving Eqs. (1)-(3).
[FIG. 1 caption: Illustration of steps in the active polarity-sorting model. (i) Motors "walk" towards barbed ends of filaments, which they slide in the opposite direction as a result of momentum conservation. (ii) Polarization and (iii) motor density at steady state as a function of position in a one-dimensional model (Eqs. S6-S7), showing motor localization to the center of the bundle by polarity sorting. (iv) In the 2D nematic droplet model, the filaments are initially oriented along the long axis without any polarity preference. Motors bind to filaments and advect them according to their polarity. (v) At steady state, motors gather at the droplet midplane and sort filaments into an aster that pinches the droplet.]
The steady-state solution of Eqs. (2)-(3) (see SI I-II) is the one dimensional equivalent of an aster, where the populations of right and left pointing filaments are completely sorted to the right and the left of the aster, and is shown in Fig. 1(ii)-(iii). Note that in this minimal model, we have not considered the usual role of myosin motors as active crosslinkers which create pairwise forces on anti-parallel filaments, leading to self-straining flows proportional to local polarization [35]. This simple model reproduces the key experimental observation that motors migrate to the center of the droplet, and predicts that they induce strong polarity sorting. 2D nematic droplet with motor-induced polarization. To explore this prediction of active polarity sorting in nematic droplets and its implications for droplet shape, we build a more realistic hydrodynamic description for a suspension of filaments and motors on a frictional substrate that damps out large-scale fluid flows. In contrast with a thin fluid film, the filaments we seek to describe aggregate into droplets with free interfaces that separate the high density nematic from the low density isotropic phases, depicted in Fig. 1(iv). This is conveniently described by a nondimensional "phase field" corresponding to filament density, $\psi$, where $\psi > 0$ ($\psi < 0$) describes the interior (exterior) of the droplet. Filaments in the high density droplet interior align in orientation, described by the 2D nematic order parameter, $Q_{ij}$. These two ingredients result in nematic droplets with tactoid shape at equilibrium [36]. The active motion of filaments and the polarity sorting induced by motors result in a net polarization within the droplet. Unlike in the 1D model, polar order in 2D is non-conserved and can be induced by motor-driven torques or relaxed by rotational diffusion. Generalizing Eqs. (1)-(3) to 2D, and observing the usual principles of conservation and symmetry [5], we obtain the dynamical equations, Eqs. (4)-(7). Eqs. (4)-(5) describe the conservation of motors and filaments, respectively, and include both active and passive fluxes on the right side. The motor flux includes diffusion in the first term and active motion of motors with a propulsion velocity $v_0$ in the second. Additionally, we include binding-unbinding kinetics for the motors (third and fourth terms in Eq. (4)), where motors bind with rate $k_{\rm on}$ wherever filaments exist (expressed by $\theta(\psi)$, the Heaviside step function) from a "reservoir" of free motors in solution, and unbind with rate $k_{\rm off}$, but are not restricted to diffuse within the droplet. Eq. (5) includes a flux created by a free energy of inter-filament interactions (full form given below) as well as an active flux induced by motors advecting filaments in their polarity direction. Eq. (6) describes the induction of polarization by torques caused by gradients in motor and filament density, and its relaxation through rotational diffusion. Seen in the framework of the Toner-Tu hydrodynamic theory that describes the flocking of polar active particles [37], where $\mathbf{p}$ has the status of both an orientational order parameter as well as a fluid velocity, the $-\zeta_0 \nabla\psi$ term gives the gradient of pressure, and $-\zeta_p \nabla m$ is the gradient of an active stress created by motors [38]. Terms similar to the latter also arise in the theory of chemotactic colloids [39] and equilibrium polar liquid crystals [40], in both of which cases a chemical concentration can guide polarization.
Microscopically, the $\zeta_p$ term captures the preferred orientation of filaments towards regions with higher motor density through the binding and crosslinking by motors, while $\zeta$ originates from the sliding forces exerted by motors. Note that $\zeta$ specifies both the polarization and the active flux in the one dimensional model, whereas the $\zeta_p$ term arises in the two dimensional model because of the additional orientational degree of freedom. The parameters $v_0$, $\zeta$ and $\zeta_p$ then all depend on motor activity but can in principle be varied independently by tuning motor properties such as size, shape, processivity and crosslinking. We assume in Eq. (7) that the nematic order is strong and arises from equilibrium forces. The timescales for relaxation towards equilibrium are specified by the "mobility" coefficients, $\Gamma_\psi$, $\Gamma_p$ and $\Gamma_Q$.
The equilibrium dynamics assume a coupled phase transition in $\psi$ and the nematic order $Q$, and a relaxation of $\mathbf{p}$, included in the total free energy, Eq. (8). The density free energy, Eq. (8b), models phase separating droplets according to standard Cahn-Hilliard dynamics. Ignoring corrections for curved interfaces [41], we use the droplet surface tension ("line tension" in 2D) and its interfacial width, $\xi = 2B/\nu^2$ [42]. The free energy for the polarization, Eq. (8c), includes two relaxation terms, $\alpha_p, \beta_p > 0$ (corresponding to a lack of spontaneous polar order), and an elastic term, $\kappa_p$. Eq. (8d) is the Landau-de Gennes free energy for the nematic order, $Q_{ij}$, with elasticity $\kappa_Q$. All equilibrium couplings between fields are written in Eq. (8e), where the first term controls the density-driven isotropic-nematic transition and induces nematic order within the droplet. The second term is a "weak anchoring" that aligns the nematic parallel to the droplet interface for $A > 0$. The third term aligns the polarization with the nematic order. Overall, this model recapitulates the key elements of nematic phase separation of the filaments and their coupling with motor activity. Results. We now employ numerical simulations to explore the consequences of motor activity on the dynamics and morphology of phase-separating nematic droplets. Starting from an initially circular droplet of radius $R_0$, we integrate Eqs. (4)-(7) until a non-equilibrium steady-state is reached (see SI III for simulation details and parameters). Figure 2 presents typical simulation results. We obtain an elongated droplet with the nematic aligned along its long axis, as shown by the density and nematic director plots for a typical case in Fig. 2(i). The motor density accumulates at the core of the aster it induces, with an outward polarization, as shown in Fig. 2(ii). We first explore the interplay between motor-generated active forces, which tend to distort the equilibrium structure, and surface tension, which resists such deformation. To this aim, we perform simulations with varying motor-induced polarization, $\zeta_p$, relative to surface tension, $\gamma$. The resulting time sequences of observed droplet shapes, starting from an unpolarized droplet, are shown in Fig. 2(iii). When $\zeta_p$ is low compared to surface tension, Fig. 2(iii a), motors localize towards the center to form an aster, which only slightly polarizes and deforms the droplet. At intermediate $\zeta_p$ (iii b), motors localize more strongly, resulting in a stronger elongation and the appearance of a constriction of the droplet around the midplane between strongly polarized lobes. Increasing $\zeta_p$ further can lead to two distinct scenarios: In (iii c), the aster divides and a third, central lobe with a strong polarity gradient emerges between two constrictions. In (iii d), finally, $\zeta_p$ is sufficiently strong to induce full division of the initial droplet into two polarized daughter droplets. The short-time dynamics of this model thus reproduces the motor centering, aster formation and polarity sorting features of the 1D model (Fig. 1), while at longer times a complex diversity of morphologies emerges from the interplay between surface tension and motor activity.
To rationalize this rich phenomenology, we build a morphological phase diagram in Fig. 3(i) in the $\zeta_p$-$\gamma$ plane while keeping other parameters constant. Each distinct steady state structure is shown in Fig. 3(ii)-(v) (top). Interestingly, each of these morphologies corresponds to experimentally observed shapes, as shown in Fig. 3(ii)-(v) (bottom). Confirming the qualitative findings described in Fig. 2(iii), at medium to high surface tension and low $\zeta_p$, we find motors form asters in the midplane of the undeformed droplet (blue stars), whereas higher $\zeta_p$ increases the influence of the centered aster on droplet shape, pinching it into two lobes (orange circles). This is consistent with the experimental observation that motors always localize to the droplet midplane, but only deform the droplet when there are more active motors [31]. Qualitatively, the motors at the aster core splay the filaments, which are anchored to the interface, and this results in an inward pinching of the interface. The motor induced polarization (second term in Eq. (6)) is derivable from a functional, $-(\zeta_p/\Gamma_p)\int d^2x\, m\,\partial_i p_i$, corresponding to an effective free energy for motor-induced spontaneous splay. This corresponds to a negative contribution to the surface energy [43] (see SI IV for the derivation) that drives the droplet shape instability when $\zeta_p m_0/\Gamma_p > \gamma$. We thus expect the transition between undeformed and pinched droplets to occur across a dividing line, $\zeta_p \sim \gamma$, which is confirmed by our phase diagram data in Fig. 3(i).
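As a toy illustration of this criterion (not the paper's phase-diagram code), the snippet below classifies points in the $\zeta_p$-$\gamma$ plane using the inequality $\zeta_p m_0/\Gamma_p > \gamma$ with placeholder values of $m_0$ and $\Gamma_p$; the boundary of the classified region is the straight dividing line $\zeta_p \propto \gamma$ discussed above.

```python
import numpy as np

# Placeholder values for the motor density and polarization mobility (assumptions).
m0, Gamma_p = 1.0, 1.0

zeta_p = np.linspace(0.0, 10.0, 101)
gamma = np.linspace(0.1, 10.0, 101)
ZP, G = np.meshgrid(zeta_p, gamma, indexing="ij")

# True where the motor-induced spontaneous splay overcomes surface tension,
# i.e. where the droplet shape is expected to destabilize (pinch).
pinched = ZP * m0 / Gamma_p > G

# The boundary of this region is the straight line zeta_p = (Gamma_p / m0) * gamma.
print("fraction of sampled parameter space predicted to pinch:",
      pinched.mean().round(2))
```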
By further increasing $\zeta_p$ and lowering surface tension, we find droplets with two asters which pinch them into three-lobed structures (Fig. 3(iv), green upward triangles). Importantly, we also find experimental realizations of such multiply pinched, equi-lobed structures, shown in the bottom panel of Fig. 3(iv). Such linear "strings of tactoids" connected by multiple motor clusters have not been previously reported and are evocative of fibers with periodic contractile units, such as in muscle, or of anomalous, multipolar biological spindles [44].
Multi-aster states such as aster lattices are in fact a generic feature of bulk active polar fluids [26,30], but here, we report and analyze their occurrence within droplets. To explore the aster-forming instability arising from the feedback between motor flux and motor induced polarization in Eq. (4) and Eq. (6), we perform a linear stability analysis for an incompressible, polar fluid featuring only $\mathbf{p}$ and $m$ (see SI V). The analysis yields a characteristic spacing between asters in bulk, $l_c \sim 1/\sqrt{m_0 \zeta_p}$, that decreases with motor density and motor induced polarization. Based on this, we expect multiple asters can be accommodated within the droplet when its major axis, $l_d$, is long enough compared to $l_c$. We confirm this numerically in Fig. 3(vi) by showing that the critical motor concentration for the transition from one to two asters for varying droplet size (but all other parameters constant) does indeed decrease with initial droplet size, with an expected scaling of $m_0 \sim 1/l_d^2$. Since the aspect ratio of a droplet at equilibrium increases with decreasing surface tension, the droplets get longer and can accommodate multiple asters when $l_d > l_c \sim \zeta_p^{-1/2}$ is satisfied. This explains why lower surface tension favors the two-aster state (green triangles) in Fig. 3(i). At even higher $\zeta_p$ and lower $\gamma$, we find a region of the phase diagram in Fig. 3(i) where the droplet is fully divided (purple diamonds).
To see how droplet division can be enhanced, we now explore an alternative shape instability in our model, Eq. (5), whereby an active density flux pushes filaments away from the aster. By varying the strength of this active flux, $\zeta$, and the surface tension, $\gamma$, while keeping $\zeta_p$ (= 5.0) fixed, we obtain the phase diagram in Fig. 3(vii), which shows undeformed single tactoids (blue stars), pinched droplets (orange circles) and fully divided droplets (purple diamonds). Here, we find that droplet division occurs over a wide range in parameter space, showing that $\zeta$ is a good control parameter to trigger droplet division, while $\zeta_p$ can be used to obtain multi-aster states.
To analyze how the active flux parameter, $\zeta$, required for full division scales with $\gamma$, we note that if the motor density and polarization are "fast variables" that relax quickly to their steady state aster solution (shown in SI VI), the active density flux term in Eq. (5), $-\zeta\nabla\cdot(m\mathbf{p})$, can be obtained from the variation of an effective "free energy" functional, $-\zeta\int d^2x\,\omega\psi$. Here, $\omega$ is a scalar potential that gives the steady state polarization corresponding to an aster through $m\mathbf{p} = -\nabla\omega$. It can be shown that this effective free energy term makes a negative contribution to the droplet surface energy (detailed in SI VI) that scales with $\zeta$ and which can then destabilize the droplet when sufficiently strong compared to surface tension. We thus expect the transition to fully divided droplets to scale as $\zeta \sim \gamma$, which is confirmed by our simulations in Fig. 3(vii).
Conclusion. We build and explore a minimal theoretical model for motor-filament droplets which captures four different droplet shapes, each of which we also observe experimentally in the actomyosin droplet system. We predict a phase diagram of expected droplet shapes based on motor activity and filament interactions, which can be tested in future experiments by systematically varying these properties. Unlike other theoretically proposed active droplet division mechanisms, the droplet shape changes we predict and show in experiment do not require fluxes of chemical reactants [13] or the large-scale fluid flows and defect dynamics [12] that arise in active nematic morphodynamics [45]. Instead, the emergence of local polar order within a nematic, and the consequent spatial localization of activity, is crucial in our model; this was not explored in a previous equilibrium nematic model for this system [31]. In addition to in vitro cytoskeletal spindles, our results also have implications for the physical principles behind cell division and the organization of the biological spindle, where motor-induced active organization of filaments [9], shape change [46], co-existence of polar order with nematic order [47] and even the formation of multi-lobed structures [48] can occur. We expect our work to inform strategies to use localization of active agents to achieve self-actuated shape morphing in materials [49] and synthetic cells.
I. DERIVATION AND SOLUTION OF ONE DIMENSIONAL MODEL
The 1D model for the dynamics of the filament density, filament polarization and motor density given in the main text follows from the corresponding conservation equations. Our one dimensional model (Eqs. (1)-(3)) is simplified by assuming incompressibility, i.e. the density is $\rho = \rho_0$. In the steady state, Eqs. (1)-(3) then reduce to Eqs. (S4)-(S5). Eqs. (S4)-(S5) are solved with "natural boundary" conditions, namely that the motor density and polarization as well as their corresponding fluxes decay to zero at $x = \pm\infty$, to yield Eqs. (S6)-(S7), where $\xi = 2D_m/v_0$ is the typical aster length scale and $m_0 = D\xi/(\zeta\rho_0)$ is the aster strength. Figure 1(ii-iii) shows the solutions, Eqs. (S6)-(S7), which represent the one dimensional equivalent of an outward-pointing aster. The motors gather at the center of the aster as expected. Note that in 1D, the flux at steady state is constant throughout the system, and natural boundary conditions require this flux to vanish. In other words, the diffusive flux of the motors and filaments is balanced by the corresponding active flux that advects them at steady state. This competition sets the width of the distribution of motors at the "aster": $\xi \sim D_m/v_0$. In a confined droplet geometry, and in higher dimensions, solutions with nonzero fluxes are possible and are seen in our numerical solutions.
II. 1D NUMERICS
To test the minimal 1D model in a finite domain, we numerically integrated Eqs. (S8)-(S10). Here, we included a free energy $F_{1d}$ which has a phase field for the density $\rho$, such that a one dimensional droplet is formed, and which has decaying terms for the polarization $p$. We integrate Eqs. (S8)-(S10) numerically using a finite difference method with a fourth order Runge-Kutta time discretization and periodic boundary conditions. The simulation parameters can be found in Table I. Figure S2 shows the motor concentration (Fig. S2(a)), polarization (Fig. S2(b)) and density (Fig. S2(c)) for a simulation at low activity, $\zeta = 1.5$. We find the emergence of a single droplet with a centering aster.
Going to higher activity, $\zeta = 3.5$, we find that the droplet can divide by means of the active motion of filaments. Figure S1 shows a time series of the division of a droplet. Here, the solid blue line shows the motor concentration, the dashed orange line the polarization, and the dash-dotted green line the density.
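For illustration, the following is a minimal sketch in the spirit of the 1D finite-difference scheme described above. It is not the paper's Eqs. (S8)-(S10), whose explicit forms are not reproduced in the text; instead it couples a conserved motor density to a polarization field induced by motor gradients, which is the polarity-sorting feedback discussed in this SI. It uses forward-Euler time stepping for brevity, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal 1D polarity-sorting toy model (illustrative, not Eqs. S8-S10):
# motors m are advected by the local polarization p and diffuse, while p is
# induced by motor-density gradients and relaxes/saturates.
L, N = 32.0, 256
dx = L / N
dt, steps = 1e-3, 100_000

D_m, v0 = 1.0, 1.0            # motor diffusion and advection speed (assumed)
zeta_p, kappa_p = 2.0, 1.0    # motor-gradient coupling and p elasticity (assumed)
tau_p, beta_p = 1.0, 1.0      # p relaxation time and cubic saturation (assumed)

rng = np.random.default_rng(1)
m = 1.0 + 0.01 * rng.standard_normal(N)   # motor density with small noise
p = np.zeros(N)                            # filament polarization

def ddx(f):   # centered first derivative, periodic boundaries
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def lap(f):   # centered second derivative, periodic boundaries
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

for _ in range(steps):
    flux_m = -D_m * ddx(m) - v0 * m * p                      # diffusive + advective flux
    m = m + dt * (-ddx(flux_m))                              # motor conservation law
    p = p + dt * (-zeta_p * ddx(m) + kappa_p * lap(p)        # induction by motor gradients
                  - p / tau_p - beta_p * p**3)               # relaxation and saturation

print("motor peak / mean:", m.max() / m.mean())   # > 1 indicates aster-like localization
```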
III. SIMULATION METHOD
We solve Eqs. (4)-(7) numerically using a finite difference scheme with a fourth order Runge-Kutta time integration on a 74 × 74 grid. We use a timestep of $\delta t = 0.01$; the various parameters are given in Table III, while we vary the surface tension, $\gamma$, and the strengths of the motor-induced density flux and molecular torque, $\zeta$ and $\zeta_p$. We initialize the system with a circular droplet of radius $R_0$.
The parameters used in the simulations in the main text are given in Table II and Table III .
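As a rough illustration of the numerical scheme described in this section (and not the authors' code), the sketch below shows a classical fourth-order Runge-Kutta step applied to a single scalar field on a periodic 74 x 74 grid with centered finite differences. Grid size and time step follow the text; the right-hand side here is a placeholder diffusion term, whereas the actual model couples $\psi$, $m$, $\mathbf{p}$ and $Q$ as in Eqs. (4)-(7). The droplet radius $R_0$ and grid spacing are assumed values.

```python
import numpy as np

N, dt = 74, 0.01             # grid and time step from the text
dx = 1.0                     # grid spacing (assumed)

def lap(f):
    """5-point Laplacian with periodic boundaries."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

def rhs(psi):
    """Placeholder right-hand side; the full model couples psi, m, p, Q."""
    return 0.1 * lap(psi)    # simple relaxational/diffusive term (illustrative)

def rk4_step(psi):
    """Classical fourth-order Runge-Kutta update for one time step."""
    k1 = rhs(psi)
    k2 = rhs(psi + 0.5 * dt * k1)
    k3 = rhs(psi + 0.5 * dt * k2)
    k4 = rhs(psi + dt * k3)
    return psi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Initialize a circular droplet of radius R0 (value assumed for illustration).
x = np.arange(N) - N / 2
X, Y = np.meshgrid(x, x, indexing="ij")
R0 = 15.0
psi = np.where(X**2 + Y**2 < R0**2, 1.0, -1.0)

for _ in range(100):
    psi = rk4_step(psi)
```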
IV. ASTER CONTRIBUTION TO SURFACE ENERGY
The formation of an aster is actively induced by the motor gradient term in Eq. (6), which can be shown to arise from an effective free energy contribution, $(\zeta_p/\Gamma_p)\int d^2x\,\mathbf{p}\cdot\nabla m$, to the polarization equation. Integrating by parts and setting the surface term to zero, this term can be seen to be equivalent to a spontaneous splay energy, $-(\zeta_p/\Gamma_p)\int d^2x\, m\,\nabla\cdot\mathbf{p}$.
For a uniform motor concentration, $m(\mathbf{x}) = m_0$, this spontaneous splay term can be transformed into a surface term using Green's theorem (the 2D version of the divergence theorem) to give $-(\zeta_p m_0/\Gamma_p)\oint dl\, \hat{\mathbf{n}}\cdot\mathbf{p}$, where the line element $dl$ runs along the boundary of the droplet and $\hat{\mathbf{n}}$ is the unit normal to the droplet boundary. Since $\mathbf{p}$ points outwards in the asters, we get $\hat{\mathbf{n}}\cdot\mathbf{p} > 0$, which therefore contributes a negative line tension to the droplet interfacial energy. Comparing with the droplet line tension energy, $\gamma\oint dl$, and assuming that the polarization has a value $p_0$ at the droplet boundary, we arrive at the condition $\zeta_p p_0 m_0 > \Gamma_p \gamma$ for destabilization of the droplet shape by the spontaneous splay induced by motor activity.
[Table caption fragment: parameters used for Fig. 3(i) and Fig. 3(vii), and to compute the critical motor concentration for three-aster splitting as a function of initial droplet size (Fig. 3(vi)).]
V. ASTER FORMATION IN BULK ACTIVE POLAR MODEL
To gain more insight into the formation of multiple asters, we study a simplified model for polar active fluids in bulk. We consider the following equations for the motor concentration, Eq. (S13), and the polarization field, $\partial_t \mathbf{p} = -\zeta_p \nabla m + \Gamma_p(\kappa_p \Delta \mathbf{p} + \alpha_p \mathbf{p} - \beta_p \mathbf{p}|\mathbf{p}|^2)$ (S14). Here, $\Delta \equiv \nabla^2$ is the Laplace-Beltrami operator. In this simplified model, we consider spontaneous long range order in the polarization instead of nematic order. We also neglect motor binding kinetics, and instead set the total number of motors to a constant average value, $m_0$, which corresponds to the steady state of the motor binding kinetics: $m_0 = k_{\rm on}\psi_0/k_{\rm off}$. Further, we suppress the density kinetics by going to the incompressible limit and setting the compressibility $\zeta_0 = 0$. Equations (S13)-(S14) are integrated using a finite difference method with periodic boundary conditions in a square box with side length $l$. For the time integration, we use a fourth order Runge-Kutta method with a timestep $\delta t = 0.01$. We initialize the system with an aster in the polarization field, $\mathbf{p} = (\cos\phi, \sin\phi)^T$, and a uniform motor concentration, $m_0$. All simulation parameters are given in Table IV.
First, we study the system size dependence by varying the box size $l$ at a fixed initial motor concentration $m_0 = 0.5$. In Fig. S3, the number of asters as a function of system size $l$ is shown. The single aster state from the initial condition is destabilized above a critical system size $l_c = 54$, and we find a monotonic increase in the number of asters. Linearizing Eqs. (S13)-(S14) around the polarized state $\mathbf{p} = \mathbf{e}_y + \delta\mathbf{p}$, $m = m_0 + \delta m$ and transforming into Fourier space with wavevector $\mathbf{k} = (k_x, k_y)^T$ gives $\partial_t \delta m = -D_m |\mathbf{k}|^2 \delta m + i v_0 m_0 \mathbf{k}\cdot\delta\mathbf{p} + i v_0 k_y \delta m$ (S15), where we neglected the Landau terms. This system has a maximal eigenvalue whose wavevector, $k_{\rm max}$, is given by Eq. (S17). The corresponding wavelength $L_{\rm max} = 2\pi/k_{\rm max}$ determines the grid spacing of asters in our simulation. Hence, we expect to find more than one aster when the system is larger than $2L_{\rm max} = 52.2$, which agrees reasonably well with the system size we find in our simulations (see Fig. S3).
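For illustration, the sketch below computes the growth rate of the linearized motor-polarization system numerically over a range of wavevectors and extracts the fastest-growing mode, from which a preferred aster spacing follows. The linearized polarization equation used here follows from Eq. (S14) with the Landau terms dropped and the perturbation taken along $x$ (so $k_y = 0$); all parameter values are placeholders rather than those of Table IV.

```python
import numpy as np

# Growth rate of the linearized motor/polarization system for a perturbation
# with wavevector k along x (k_y = 0), Landau terms neglected as in the text:
#   d(dm)/dt  = -D_m k^2 dm + i v0 m0 k dpx
#   d(dpx)/dt = -i zeta_p k dm - Gamma_p kappa_p k^2 dpx
# Parameter values below are placeholders, not those of Table IV.
D_m, v0, m0 = 1.0, 1.0, 1.0
zeta_p, Gamma_p, kappa_p = 2.0, 1.0, 1.0

ks = np.linspace(1e-3, 2.0, 2000)
growth = np.empty_like(ks)
for i, k in enumerate(ks):
    M = np.array([[-D_m * k**2,        1j * v0 * m0 * k],
                  [-1j * zeta_p * k,   -Gamma_p * kappa_p * k**2]])
    growth[i] = np.linalg.eigvals(M).real.max()

k_max = ks[np.argmax(growth)]
print("fastest-growing wavevector k_max =", k_max)
print("preferred aster spacing L_max = 2*pi/k_max =", 2 * np.pi / k_max)
```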
Second, we investigate the dependence on the number of motors in the system while keeping the system size constant at $l = 32$. We initialized the system with an aster in the polarization field and varied the initial motor concentration $m_0$. Figure S4 shows the number of asters as a function of motor concentration. We find a critical motor concentration $m_0 = 1.5$ above which multiple asters emerge. Using the maximal wavevector determined in Eq. (S17) from our linear stability analysis, we can compute the motor concentration at which the corresponding wavelength $L_{\rm max}$ is half the system size, such that the system can support two asters. The resulting critical motor concentration has a value of $m_0 = 1.3$, which agrees reasonably well with the critical motor concentration we find from our simulations.
In Fig. S5, we show the critical motor concentration at which one aster becomes unstable as a function of system size. We show critical concentrations from both our simulations and our linear stability analysis, which agree well. The critical motor concentration becomes smaller with larger system size.
VI. STEADY STATE ASTER SOLUTION
In order to find a steady state solution corresponding to an aster, we focus on the motor and polarization equations, which at steady state are given by $0 = -\zeta_p \nabla m + \Gamma_p \kappa_p \Delta \mathbf{p}$ (S19) and $0 = \nabla \cdot [D_m \nabla m + v_0 (m\mathbf{p})]$ (S20). Here, we neglected binding kinetics in the motor equation as well as the decaying terms for the polarization $\mathbf{p}$, and set the compressibility $\zeta_0 \to 0$. A solution to Eq. (S20) can be obtained by satisfying $\nabla \ln(m) = -(v_0/D_m)\,\mathbf{p}$. We insert this into the Laplacian of Eq. (S19) and find a solution of $0 = \lambda_m m + \Delta \ln(m)$, where $\lambda_m = \zeta_p v_0/(\Gamma_p \kappa_p D_m)$. This is known as the Liouville-Bratu-Gelfand equation [50][51][52] and has been solved for a number of physically relevant cases [53]. A single aster solution is given by
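The explicit aster profile is not reproduced in the text above. For reference only, a standard radially symmetric solution of the 2D Liouville equation $\Delta \ln(m) + \lambda_m m = 0$ is $m(r) = 8c/[\lambda_m (1 + c r^2)^2]$ for an arbitrary constant $c > 0$; this is a well-known solution of that equation and is offered here purely as a consistency check, not as the authors' stated profile. The sketch below verifies it symbolically.

```python
import sympy as sp

# Symbolic check that m(r) = 8*c / (lam * (1 + c*r**2)**2) solves
# Delta(ln m) + lam * m = 0, using the 2D radial Laplacian
# Delta f = (1/r) d/dr ( r * df/dr ). Constants c, lam are arbitrary positives.
r, c, lam = sp.symbols("r c lam", positive=True)

m = 8 * c / (lam * (1 + c * r**2) ** 2)
lap_ln_m = sp.diff(r * sp.diff(sp.log(m), r), r) / r   # radial Laplacian of ln(m)

residual = sp.simplify(lap_ln_m + lam * m)
print(residual)   # -> 0, confirming the candidate profile solves the equation
```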
|
2021-02-16T02:16:30.021Z
|
2021-02-15T00:00:00.000
|
{
"year": 2021,
"sha1": "7bac0a3f48ecbbae6bef63faab8ea7848238c0d6",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevResearch.3.043061",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "7bac0a3f48ecbbae6bef63faab8ea7848238c0d6",
"s2fieldsofstudy": [
"Physics",
"Biology"
],
"extfieldsofstudy": [
"Physics"
]
}
|
227342203
|
pes2o/s2orc
|
v3-fos-license
|
Sales proportion of beef and goat parts at traditional markets in Medan City
This study aims to determine the sales proportion of beef and goat meat based on their parts, and the balance of their supply and demand, in traditional markets in Medan City. The research was conducted for one month, from August to September 2017, in five traditional markets located in Medan City, namely Olimpia, Petisah, Simpang Limun, Sei Kambing and Brayan. The population in this study consisted of the beef and goat meat traders in the five traditional markets: 41 beef traders, 6 goat meat traders and 2 traders of both types of meat, from which a sample of 33 traders was drawn. The data used are quantitative and cover the sales volumes of beef and goat meat. The results show that, of the beef parts sold in traditional markets in Medan City, the most sold is the leg meat, at 166,568.9 kg (60.24%), and the least sold is the lower leg, at about 3,126 kg (1.13%). For goat meat sold in the traditional markets in Medan City, the largest part is the meat, at 9,230.3 kg (88.6%) of total goat meat sales, and the least is the lower leg, at 427.2 kg (4.1%). The results also indicate a balance between the supply and demand of beef and goat meat in these traditional markets in Medan City.
Introduction
Livestock is a very important part of the agricultural sector and of Indonesia's economy overall, and plays a vital role in many different aspects of national and daily life, including in providing national food security, as nutrition, in generating income and savings, and in many social and cultural functions. Meat from livestock plays a significant role in the livelihood of many people living on smallholder farms in rural areas, particularly regarding their family income, nutrition and welfare [1]. Livestock is part of the agricultural sub-sector that has a great opportunity to develop and plays a very important role in the provision of food, especially animal protein. People's nutritional need for livestock products will increase every year with population growth and with society's growing awareness of the importance of nutrition for improving quality of life [2].
The demand for meat in Indonesia is growing as a result of urbanization, the growing population, and the rapid growth of the urban middle class and its tendency to spend money on food [3]. According to Yulianto and Saparinto [4], beef cattle are one of the main meat-producing livestock in Indonesia. However, domestic beef production has not been able to meet the nutritional needs of the population, owing to a growing population and limited livestock production. The low population of beef cattle is related to the fact that most of the animals are kept by small-scale farmers with limited land and capital. The fulfillment of animal protein needs is closely related to the domestic meat supply. Currently, domestic demand for meat has not been matched by adequate supply. Sudaryanto and Erizal [5] reported that efforts to increase people's food security, especially related to livestock products, are judged by the ability to provide livestock products; it should also be noted how far the efforts developed are able to increase people's purchasing power. Medan City is the capital of North Sumatera Province, whose people consume beef and goat meat. The population of Medan City has increased from year to year, reaching 2,210,624 people in 2017.
Beef production in Medan City has fluctuated but has tended to increase from year to year. This increase in production, however, has not been able to offset the population growth rate of Medan City, creating an imbalance that causes market prices to keep rising. Goat meat production in 2017 amounted to 3,546.08 tons for the North Sumatra region, with total consumption of 3,484.44 tons; both production and consumption tended to decline compared to the previous year. Medan City has a high potential of traditional markets, and traditional markets deserve to be taken into account in community development. The market is a place to transact the sale and purchase of various types of community needs, whether primary, secondary or tertiary. Traditional markets provide enough commodities, sold in groups. The purpose of this study was to investigate the sales proportion of beef and goat parts (tenderloin, sirloin, leg, gravel, lower leg, chop, and ribs) at traditional markets in Medan City. Government support for distribution creates positive conditions for economic growth, such as support for transportation [6] or the building of airports to distribute materials for economic growth [7].
Method
This research was conducted in five traditional markets in Medan City. The study locations were determined purposively, based on the larger number of traditional markets in Medan City compared with modern markets. The selected traditional markets are the Central Market in sub-district Medan Kota, Petisah Market in sub-district Medan Petisah, Simpang Limun Market in sub-district Medan Amplas, Brayan Market in Brayan, and Sei Kambing. These traditional markets represent the central, east, west, south and north parts of Medan City. Market selection was also based on the largest number of traders in each area, and all traders at these locations served as the source of meat sales data during the survey period. This research was conducted from August to September 2017.
This study uses primary and secondary data, both qualitative and quantitative. Primary data were obtained from direct interviews with meat traders or retailers, covering sales volume by specific section, using a questionnaire administered to respondents. The questionnaire contains open and closed (structured) questions: open questions are those whose answers are descriptions and are not provided in advance, while closed questions are those whose answers have been provided. The questionnaire was addressed to the respondents involved during the marketing process. Marketing agency respondents were determined by the snowball sampling method. The population in this study were the beef and goat meat traders in the Medan City Central Market, Petisah, Simpang Limun, Sei Kambing, and Brayan, with a total of 49 people. The number of samples was determined using descriptive statistics with the Slovin formula [8], n = N / (1 + N e^2), where N = total population, n = number of samples, and e = allowance rate (10%). The 10% allowance rate is used for populations of not more than 2,000, according to Sugiyono [9]. The number of samples obtained is n = 32.8, rounded to 33 samples, each of which was given the prepared questionnaire. So the minimum sample used in this study was 33 respondents. Sampling is done by simple random sampling, a sampling technique where
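As a quick check of the sample-size calculation, the sketch below applies Slovin's formula with the values reported in the text (N = 49, e = 0.1).

```python
# Slovin's formula: n = N / (1 + N * e**2)
N = 49      # population of beef and goat meat traders
e = 0.10    # allowance (margin of error) rate

n = N / (1 + N * e**2)
print(round(n, 1), "->", round(n), "respondents")   # ~32.9, i.e. 33 respondents
```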
Sales Proportion Based on Beef Sections
The proportion of beef sales during August to September 2017 in traditional markets in Medan City is presented in Table 1. Based on Table 1, the leg accounts for the biggest portion of beef parts sold in these traditional markets, at 166,568.9 kg (60.24%); this is because the thigh is the largest portion of meat on a single animal. The cuts of beef from this section are very thin and more or less tough. This meat is usually used for meat pizza, beef steak, satay, rendang and corned beef, and can be processed into all types of food; the typical consumers are business buyers such as meatball sellers and restaurants, among others. The lowest-selling part is the lower leg, at 3,126 kg (1.13%), because this cut is chosen mainly by processors: lower legs are used as a mixture for meatballs and soup stock, although processors usually prefer the chopped portion. Lower legs are also rarely used by household consumers.
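As a simple consistency check on the reported figures, the leg share implies a total beef sales volume of roughly 276,500 kg, against which the lower-leg volume of 3,126 kg corresponds to about 1.13%, matching the reported percentage. A short sketch:

```python
# Back out the implied total beef sales from the reported leg figures,
# then check the reported lower-leg percentage against it.
leg_kg, leg_share = 166_568.9, 0.6024
lower_leg_kg = 3_126.0

total_kg = leg_kg / leg_share
print(f"implied total beef sales: {total_kg:,.0f} kg")    # ~276,508 kg
print(f"lower-leg share: {lower_leg_kg / total_kg:.2%}")   # ~1.13%
```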
Sales Proportion Based on Goat Meat Sections
The sales proportion is the share of total sales of goods or services accounted for by each part sold. The total proportion of goat meat sales in traditional markets in Medan City can be seen in Table 2.
Characteristics of marketing institutions and reasons for purchasing beef and goat parts
The characteristics of respondents observed in this study include sex. The respondents involved in the marketing of beef and goat meat in the traditional markets of Medan City are of both sexes but are dominated by men, with 32 traders (65.30%), while female traders number 17 (34.70%). The predominance of men over women is associated with the physical activity involved, such as ordering, transporting and cutting, which requires considerable time and effort. The questionnaire results show that 100% of traders state that no special permission is required to buy meat from breeders or from beef and goat sales companies. Although traders sell beef from cattle-fattening companies, they also receive and sell local cattle raised by smallholder farms, especially in the case of goat traders. Each market usually has one or more large merchants who are tasked with providing meat to smaller traders. The commonly used system is the wholesale system, i.e. small traders do not have to buy a whole cow but can buy half of a carcass, or just a certain part. Some traders also buy their own cattle and have them slaughtered at a government-owned "Rumah Potong Hewan" (RPH, slaughterhouse) or a private one.
Goat meat, by contrast, is not sold under a bulk system; buyers must purchase a whole goat. 16.27% of traders get their beef from the government RPH, while 83.73% buy their cattle from cattle-fattening companies or from local farmers and then have them slaughtered at a private RPH, for example RPH Tani Asih, whose cattle can come from PT. LAL or PT. Ariffa Global. For goat meat, 100% of traders obtain it from local farmers. The one-week meat supply for all markets combined is 68,666.6 kg, with 68,658 kg sold during the week, a difference of 8.6 kg. This remaining meat is frozen and the amount offered in the following week is adjusted accordingly, so that a balance between supply and demand is achieved in the market. The selling price of beef usually varies depending on the part of the animal (Figure 1). Figure 1 shows that 41 of 43 beef traders (95.34%) said that the thigh is the most-sold part in supply and demand activities, while 2 of 43 traders (4.65%) said that the chop and other meat are the most-sold commodities. The main factor determining beef sales, and the one that most often affects consumers' buying interest, is the need for a certain function; that is, buyers purchase a given part of the beef for a specific processing purpose. A brief description of each part is as follows: (1) Leg: the main factor in the consumer's decision to buy this part is its wide availability in the market (90%), which allows all traders at the research sites to sell it. This cut is also commonly used for a variety of dishes, especially beef rendang, which is often the main menu item both in daily meals and events and in processed-beef businesses. Another factor affecting purchases of this part is its price, which is below that of other cuts (10%). (2) Tenderloin and sirloin: the main factor in the consumer's decision to buy the tenderloin is a specific intended use (90%), as some dishes can only be prepared with this type of meat; sirloin is chosen when the dish can include a little fat, such as steak and satay, and sirloin is preferred over tenderloin because its price is below that of tenderloin (10%). (3) Ribs: the main factor in the decision to buy this part is a specific intended use (100%), as some dishes can only be prepared with rib meat. Super ribs usually contain more meat than ordinary ribs, so super ribs are commonly used for rib soup, rib gulai, and similar dishes, while ordinary ribs are chosen because their price is below that of super ribs, which is the reason some buyers, mainly small sellers, use them (75% of rib buyers). (4) Chop: the main factor for consumers buying this part is its much cheaper price (100%) for the processed meatball, nugget and sausage industries, especially small and medium business actors; other responses suggest that this part is best suited for some types of processed foods. (5) Kikil and tail: the main factor for buying these parts is a specific intended use (100%), as some dishes can only be prepared with them, such as tauco kikil, kikil curry, kikil soup and oxtail soup. (6) Feet: the main factor for buying this part is a specific intended use (100%), i.e.
some dishes can only be prepared with this cut, such as goat leg soup. Another reason for using beef feet is as an additional ingredient in some types of processed foods, such as soups or sauces made with beef bone marrow; and (7) Goat head: the main factor for buying this part is a specific intended use (100%), i.e. some dishes can only be prepared with it, such as goat head curry and goat head soup.
Conclusion
The proportion of beef sales in the traditional markets in Medan City during the period of August to September 2017 was highest for the leg, at 166,568.9 kg (60.24%), and lowest for the lower leg, at 3,126 kg (1.13%). For goat meat sales in the traditional markets in Medan City, the largest part was the meat, at 9,230.3 kg (88.6%), and the lowest was the lower leg, at 427.2 kg (4.1%). There was a balance between supply and demand for beef in these traditional markets. The main determining factor (90.90%) for consumers deciding to purchase certain parts of beef and goat meat is the specific intended use of the meat.
|
2019-09-15T03:06:23.870Z
|
2019-03-01T00:00:00.000
|
{
"year": 2019,
"sha1": "5b15cb5ba6be0dd7f7ddaa54a86c10a1cd2dc0f5",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1175/1/012002",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "046b793696818f01c704f4a347feaa393709ad3f",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Physics",
"Business"
]
}
|
261738763
|
pes2o/s2orc
|
v3-fos-license
|
Challenges in predicting future high-cost patients for care management interventions
Background To test the accuracy of a segmentation approach using claims data to predict Medicare beneficiaries most likely to be hospitalized in a subsequent year. Methods This article uses a 100-percent sample of Medicare beneficiaries from 2017 to 2018. This analysis is designed to illustrate the actuarial limitations of person-centered risk segmentation by looking at the number and rate of hospitalizations for progressively narrower segments of heart failure patients and a national fee-for-service comparison group. Cohorts are defined using 2017 data and then 2018 hospitalization rates are shown graphically. Results As the segments get narrower, the 2018 hospitalization rates increased, but the percentage of total Medicare FFS hospitalizations accounted for went down. In all three segments and the total Medicare FFS population, more than half of all patients did not have a hospitalization in 2018. Conclusions With the difficulty of identifying future high utilizing beneficiaries, health systems should consider the addition of clinician input and ‘light touch’ monitoring activities to improve the prediction of high-need, high-cost cohorts. It may also be beneficial to develop systemic strategies to manage utilization and steer beneficiaries to efficient providers rather than targeting individual patients.
Background
Chronic disease accounts for most US healthcare spending [1]. Hospitalizations associated with chronic diseases are expensive and increase the risk of harm to frail older patients [2]. Within Medicare value-based payment models, providers are incentivized to proactively engage patients and coordinate care to reduce high-cost service utilization such as hospitalizations. Given the investments required to operate care management programs, value-based providers, including accountable care organizations (ACOs), need to allocate resources as efficiently as possible, targeting resources towards patients who are the most likely to benefit. This includes efforts to triage and treat beneficiaries in the community to avoid hospitalization and to provide transitional support for those who have an admission. Currently, organizations use some combination of predictive algorithms, historical utilization and clinician referral to identify patients for care management [3]. However, little is known about the efficiency or efficacy of these identification processes [4][5][6]. Moreover, specific mechanisms contributing to possible inefficiencies in targeting resources to high-need, high-cost beneficiaries have not been identified or characterized in depth. We identified and investigated two such mechanisms arising in a heart failure segmentation algorithm used to identify Medicare beneficiaries who are more likely to have a hospital admission.
Despite the widespread adoption of risk stratification, there is limited evidence that care management can be targeted in a manner that consistently lowers net spending [7][8][9]. Several studies suggest that Medicare ACO savings do not appear concentrated among patients with high or complex needs [10,11]. What's more, historically under-served populations are less likely to be identified by data-driven algorithms that rely on coding in administrative data [12]. In order to investigate possible factors contributing to these results, we hypothesized two underlying mechanisms of inefficiency that may arise when risk-stratifying Medicare beneficiaries: (1) narrow cohort focus; and (2) utilization heterogeneity.
Narrow cohort focus. In forming our hypothesis, we first noted that progressively more restrictive segments should more successfully predict future high utilizing patients. However, because the more restrictive segments contain commensurately fewer beneficiaries, they represent less of the total hospital utilization in a covered population. If care management is only targeted to particular segments, opportunities to reduce hospital utilization for beneficiaries outside of the segments may be foregone.
Utilization heterogeneity. The second source of inefficiency arises because, even within progressively more restrictive patient segments, there is still substantial heterogeneity in performance year hospital use. This means that within targeted cohorts a small number of beneficiaries may accumulate a considerable number of hospitalizations in the subsequent year, while a sizable proportion of the same cohort will not be hospitalized at all. Resources devoted to reducing hospitalizations for beneficiaries who would not have been hospitalized anyway may provide other benefits, but from a financial perspective the activity is unlikely to generate a favorable return on the investment.
This study used a heart failure segmentation algorithm. Heart failure is significant because it is highly prevalent in the Medicare population and because per-beneficiary spending for patients with heart failure is approximately twice the Medicare average [13,14]. Moreover, there are disparities in care access and heart failure outcomes between African American, Hispanic and White beneficiaries, making any investigation of heart failure segmentation a potential avenue to address health equity. Specifically, the development of more objective criteria for the allocation of resources could overcome both conscious and unconscious bias in decisions related to the assignment of care managers to patients that might benefit. We specifically investigated inpatient hospital utilization for three increasingly narrow cohorts of Medicare heart failure patients defined using base year (2017) utilization. We then measured utilization in the subsequent performance year (2018) as well as total per-beneficiary, per-year spending. Finally, within each segment, we examined the proportion of beneficiaries with zero 2018 inpatient admissions, for whom care management may have potentially limited impact on total spending.
Methods
This observational cohort study was based on a 100% sample of Medicare claims and demographic data contained in a two-year window spanning 2017 and 2018. In the United States, Medicare provides health insurance for seniors aged 65 and over, with Medicare Part A covering hospital care, Part B covering outpatient care, and Part C (Medicare Advantage) representing the same coverage as Parts A and B but through an alternative commercial offering. The study population was Medicare beneficiaries eligible for attribution to an ACO (i.e., at least one qualifying Evaluation and Management (E&M) service from an ACO physician in 2017). To identify this group, we start with Medicare beneficiaries who were continuously enrolled in Medicare Parts A and B for the whole study period (2017-2018). We excluded beneficiaries with end-stage renal disease (ESRD), in part because ESRD patients are atypical of the rest of the heart disease cohort and would need to be assessed separately. Moreover, ESRD patients are often attributed to specialized ACO models, also necessitating separate analysis. We also excluded beneficiaries who died in 2017 and beneficiaries who enrolled in a Medicare Advantage plan (Part C) at any time in 2017 or 2018. We retained patients that died in 2018, provided they had continuous enrollment in Medicare Parts A and B prior to death. We treated 2017 as a base year in which we applied three progressively restrictive definitions of heart failure to create three cohorts within the study population. We then produced descriptive statistics to compare the performance year in the three heart failure cohorts, using the overall study population described above as a basis for comparison. The three heart failure cohorts were defined by progressively restrictive heart failure criteria applied to the 2017 data; each of the more restrictive segments was also contained within the less restrictive segments, allowing the narrowing effects of the heart failure segmentation algorithm to be observed. It is important to note that we could have investigated alternative base-year inclusion criteria such as the top 5% or 1% spenders, or those that had zero hospitalizations in the base year.
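To make the segmentation logic concrete, here is a minimal sketch of how nested, progressively restrictive cohorts might be built from a beneficiary-level claims summary table. The column names and the example criteria (an HF diagnosis flag and counts of HF-related admissions) are hypothetical placeholders, since the study's exact cohort definitions are not reproduced in the text above.

```python
import pandas as pd

# Hypothetical 2017 beneficiary-level summary (one row per beneficiary).
# Column names and criteria are illustrative, not the study's actual definitions.
base = pd.DataFrame({
    "bene_id":        [1, 2, 3, 4, 5],
    "esrd":           [False, False, True, False, False],
    "died_2017":      [False, False, False, False, False],
    "hf_dx_2017":     [True, True, False, True, True],
    "hf_admits_2017": [0, 1, 2, 2, 3],
})

# Study population: continuously enrolled FFS beneficiaries minus exclusions.
pop = base[~base["esrd"] & ~base["died_2017"]]

# Nested, progressively restrictive heart failure segments (placeholder rules).
seg1 = pop[pop["hf_dx_2017"]]                     # any HF diagnosis
seg2 = seg1[seg1["hf_admits_2017"] >= 1]          # HF diagnosis plus an HF admission
seg3 = seg2[seg2["hf_admits_2017"] >= 2]          # multiple HF admissions

print(len(pop), len(seg1), len(seg2), len(seg3))  # each segment nests within the last
```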
Measures
Outcome measures include 2018 performance year all-cause, acute-care hospitalizations per thousand beneficiaries. Admissions were identified using Medicare Part A bills with a type code of 60, indicating an acute-care stay. We also calculated the percentage of total study population hospitalizations in 2018, and the percentage of beneficiaries with zero 2018 hospitalizations per segment. To assess total cost of care, we calculated total per-beneficiary, per-year (PBPY) spending as the sum of all Medicare Part A and B reimbursements for both 2017 and 2018.
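A small sketch of how these outcome measures could be computed from a beneficiary-level table of 2018 admission counts and spending; the data frame and its column names are hypothetical placeholders, not the study's data.

```python
import pandas as pd

# Hypothetical 2018 outcomes per beneficiary (columns are illustrative).
df = pd.DataFrame({
    "bene_id":     [1, 2, 4, 5],
    "admits_2018": [0, 2, 0, 1],
    "spend_2018":  [4_000.0, 61_000.0, 9_500.0, 27_000.0],
})

admits_per_1000 = 1000 * df["admits_2018"].sum() / len(df)   # admissions per 1,000
pct_zero_admits = (df["admits_2018"] == 0).mean() * 100       # % with zero admissions
pbpy_spend = df["spend_2018"].mean()                          # per-beneficiary, per-year spend

print(f"{admits_per_1000:.0f} admits/1,000; {pct_zero_admits:.0f}% with zero admits; "
      f"PBPY spend ${pbpy_spend:,.0f}")
```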
Analysis
We used the 2017 claims data to establish both the overall study population as well as the cohorts included in the segments described above. We then used the 2018 claims data to determine 2018 outcome measures for each segment. We plotted utilization histograms showing the distribution of 2018 hospitalization counts across the three segments. All distributions as well as the outcome measures were compared with corresponding variables from the overall ACO-attributable Medicare population. This study was reviewed by the Brandeis University IRB and deemed to be an exempt study.
Results
The study population consisted of approximately 26 million beneficiaries. As shown in Table 1, the least restrictive heart failure segment (Segment 1) yielded a cohort containing about 3.6 million beneficiaries, while the more restrictive segments yielded about 1.8 million and 1.0 million beneficiaries for Segments 2 and 3, respectively. In general, the cohorts were similar in terms of reasons for eligibility, but the 2018 mortality rate was higher for the narrowest cohort (21% for cohort 3) compared to the most inclusive cohort (13% for cohort 1). Average annual 2017 Medicare spending for the narrowest cohort was $46,440 compared with $22,947 for the most inclusive cohort. Performance year spending was generally lower, with 2018 spending averaging $27,037 for the narrowest cohort compared with $18,391 for the most inclusive cohort.
Table 2 shows the 2018 hospital admissions for each heart failure cohort and for the overall study population (labeled All FFS beneficiaries). The study population of 26 million beneficiaries accounted for about 6 million hospitalizations in 2018. The three progressively more restrictive segments contained commensurately smaller proportions of the total hospitalizations; Segment 1, the least restrictive segment, contained less than one third of them. Table 2 also indicates the degree to which the hospitalization rates are dramatically higher: hospitalization rates in Segment 1 are twice the national average of 240 admissions per 1,000 beneficiaries, and rates are nearly 3 and 4 times higher than the national average in Segments 2 and 3, respectively. Finally, as shown in Table 2, 84% of beneficiaries in the study sample had zero hospitalizations in 2018, compared with 68%, 62% and 54%, respectively, in the three heart failure cohorts.
Figure 1 shows histograms of 2018 counts of hospital utilization for the overall study population and each heart failure cohort, including the prevalence of zero 2018 admissions. Overall, the basic shape of the distribution is the same, despite the fact that the admission rates are much higher for the progressively more restrictive segments.
Discussion
Our study shows that increasingly restrictive heart failure segments do, in fact, isolate patients with dramatically higher rates of hospitalization, roughly 2, 3 and 4 times higher, respectively, compared to the overall study population. Thus, the restrictive segments may appear to provide viable targets for care management. However, as shown in Tables 1 and 2, less than one third of the hospitalizations from the overall population are represented in the largest segment, and that proportion drops to only 13% of total hospitalizations for the most restrictive segment. This illustrates the mechanism by which a narrow cohort focus misses substantial hospital utilization outside of the target group. By focusing on cohorts with higher likelihood of future hospitalizations, segmentation algorithms could exclude the majority of patients destined to be hospitalized in the performance year. Additionally, between half and two-thirds of beneficiaries in the 3 heart failure cohorts had zero hospitalizations in 2018. Thus, despite isolating cohorts with significantly higher hospitalization rates than the overall Medicare FFS population, the targeted beneficiary segments still have a high likelihood of not being hospitalized. This illustrates the mechanism of utilization heterogeneity. Assigning care managers to historically high-cost enrollees may provide significant patient benefits [9], but our analysis suggests it may not be an efficient means for allocating resources towards reducing hospitalizations. What's more, these strategies ignore historically underserved populations, who are less likely to have a broad array of diagnoses in claims or electronic health record data, further perpetuating disparities.
Even though this study investigated only one, relatively simple approach to segmenting beneficiaries, the methodology may be applicable to other risk stratification strategies. The key considerations are the proportion of beneficiaries who may be 'missed' and the proportion who may receive care management support but would not have incurred a hospitalization in the performance year even without such support. The two mechanisms discussed here are both ubiquitous and unavoidable. We know from the actuarial science literature that beneficiary risk pools exhibit spending distributions similar to the utilization patterns illustrated in this paper [15,16]. In other words, targeting algorithms include some degree of error. This may be driven by missing information related to the beneficiary's health status or by limitations in the statistical or data-driven techniques used to identify the cohorts. This may also explain why clinician referral is frequently used as a complement to data-driven targeting methodologies. Analysis of clinician referral as a complement to the segmentation approaches is beyond the scope of the present study. However, given the challenges identified herein, future research may be directed toward quantifying the degree to which more real-time referrals might remediate some of the challenges presently identified, paying special attention to personalized relationships between patients and providers that may not be captured in purely data-driven approaches.
Limitations
This study has several important limitations. First, it was based on claims data, which contain limited information about patient medical conditions. Second, it examined only one approach to segmenting Medicare beneficiaries with heart failure. As indicated earlier, we could also have examined yet more restrictive (narrower) cohorts, and could also have considered broader segments, including segments exhibiting overall traits of good health (e.g., zero base-year hospitalizations). We also note that many who exhibited zero hospitalizations in the performance year may still have been at significant risk for hospitalization, with that risk carrying over to subsequent years. Stated differently, one year may not have been enough time for that risk to manifest as hospitalizations. Moreover, it is possible that care coordination resources, even those provided during a performance year with zero hospitalizations, could ultimately benefit those patients in subsequent years. Third, our study used a 100% sample of Medicare fee-for-service beneficiaries. By excluding Medicare Advantage (MA) patients, we may be missing important variables associated with the resources already allocated through the supplemental benefits that may be provided to MA patients. In practice, a provider adopting a value-based payment model would only see its own attributed lives, which may only approximate the underlying distribution. Finally, our approach assessed potential inefficiencies associated with beneficiary segmentation with respect to hospital admissions. However, care management also influences other types of utilization, such as post-acute care.
Of note, the data supporting the findings of this study are available from the Centers for Medicare and Medicaid Services (CMS), but restrictions apply to the availability of these data, which were used under license for the current study and so are not publicly available. However, similar data are available upon request and with the permission of CMS.
Conclusions
If all risk stratification approaches are limited by the two mechanisms described in this paper, it implies that risk stratification alone cannot efficiently target care management resources. This suggests a need for risk-based providers to augment segmentation strategies with additional information from clinicians, family members, patient surveys or remote biometric monitoring [17]. Such information could support more adaptive, real-time decision making and deployment of resources such as paramedic or clinician home visits, rather than relying exclusively on a priori base-year parameters to allocate care management resources. Although promising in concept, the few high-quality studies examining the impact of remote monitoring technologies on outcomes have shown mixed results [18]. The Centers for Medicare and Medicaid Services recently established coverage for remote patient monitoring (RPM) services, with maximum annual payments of $1,460 per patient [19]. But at these payment levels, the use of RPM services may be subject to the same inefficiencies as traditional care coordination. Lower-cost surveillance techniques may be necessary to support system-level changes.
ACOs and other provider groups need to consider systems-level solutions that promote efficient care for all beneficiaries, including those approaching, but not yet in, the late stages of chronic illness. Rather than focusing on individual high-cost patients, health care organizations could generate savings more effectively by implementing systematic initiatives to increase efficiency, for example: implementing decision support systems to refer patients to efficient medical specialists, reducing capacity to perform overused procedures, or establishing systems to shift care from hospital outpatient departments to physician offices when appropriate. However, strategies that target individual patients are more appealing to health systems that are predominantly fee-for-service. Systemic approaches to reduce spending may help them succeed in value-based contracts, but they will reduce their fee-for-service revenue [20].
Fig. 1 Distribution of Hospital Utilization Across Study Cohorts: count of hospital admissions (0 to 6 or more) by heart failure cohort, 2018. Notes: Segment 1: beneficiaries with at least one 2017 ambulatory (Part B) bill or a 2017 index inpatient (Part A) admission with a heart failure ICD-10 diagnosis code (N = 3.6 million). Segment 2: beneficiaries with at least two 2017 ambulatory (Part B) heart failure bills 30 days apart or one 2017 index hospitalization with a primary diagnosis of heart failure (N = 1.9 million). Segment 3: beneficiaries with one or more 2017 index hospitalizations with a primary diagnosis of heart failure (N = 1.0 million).

Table 1 Characteristics of Heart Failure Cohorts in 2017. Segment definitions as in the notes to Fig. 1.

Table 2 Beneficiary counts and outcome measures for all Medicare ACO assignment-eligible beneficiaries and for the three heart failure segments in 2018.
Research and Development on the Visualization System for Official and Scientific Management in Enterprise
A visualization system for official and scientific management has been designed in C++. Through this system, database information is presented to users intuitively, freeing them from having to wade through a sea of raw data. The system realizes information mining and integrated display while improving the construction level and integration degree of the enterprise's information.
Introduction
Enterprises today are thoroughly shaped by information technology. As the level of enterprise informatization rises, a variety of data management systems are deployed throughout the enterprise, and massive amounts of historical data accumulate in these systems.
Faced with this mass of data, users often fail to find the data they need. Visualization technology makes it possible to extract the required data from the various systems and display it intuitively in an integrated interface through graphics, tables and other forms.
In [1], Qian Zhu proposes an enterprise PDM system based on traditional PDM and web graphics files, built on SQL Server 2000.
Concerning database security protection mechanisms, Xianzhong Chen and Jian Zhang present several solution schemes for authentication [2].
In this paper, we analyze the digitalization status of the enterprise and the key technologies for visualization, and we develop the visualization system using C++ Builder. The construction level and integration degree of the enterprise's information are improved noticeably.
Overall Design of System
After the user installs the visual management system, the system first verifies the user's identity; if the identity is correct, the user is allowed to enter. The visual management system then connects to the enterprise database server over the enterprise network, accesses and processes the data in the database, and returns the results to the visual management system for display, thus realizing visualization of the enterprise database. The overall information flow of the system is shown in Figure 1.
The Realization of Database Management
The database management system is based on SQL Server 2000 and SQL Server 2005. To make sure the application program and the databases can communicate with each other, some configuration is required: in our system the PDM (Product Data Management) database runs on SQL Server 2000 and the BPM (Business Process Management) database runs on SQL Server 2005. According to the actual situation, we use the BDE (Borland Database Engine) controls of C++ Builder to access the PDM database and the ADO (ActiveX Data Objects) controls to access the BPM database. ADO can access a database directly through OLE DB (Object Linking and Embedding Database), but this requires the path of the database file to be known. If the enterprise databases are stored on different servers, obtaining the file path is inconvenient and is liable to leak confidential information. Given the actual situation of the enterprise, we therefore decided to access the database through ODBC (Open Database Connectivity) using ADO. Since connecting a database with the BDE controls is simpler, here we mainly introduce the configuration of the ADO controls connecting to the database via ODBC, as follows: 1) Open the operating system's "Control Panel" and find "Administrative Tools", which provides the ODBC data source administrator. Open "Data Sources (ODBC)" and add a data source on the "System DSN" page; the data source configuration interface appears as in Figure 2. Next, choose the SQL Server driver for the database system.
2) Choose the driver and click the "Finish" button; the interface shown in Figure 3 will then appear. Name the new data source (here named Workflow) and select the database server.
3) After finishing the data source configuration, create two database aliases, ZF_PDM and ZF_Workflow, in the BDE Administrator that is built into C++ Builder. Set SERVER NAME, DATABASE NAME and USER NAME: SERVER NAME is the name or IP address of the server to be connected, DATABASE NAME is the name of the database on that server, and USER NAME is the name of the database user, generally set to "sa". After setting these values, right-click the new database alias and click "Apply", then click the alias to connect. Once the database connects successfully, the BDE and ADO controls of C++ Builder can be used to operate on the database. 4) After completing the above settings, we can access the database via the BDE controls. Because the system is complex, here we describe the attribute settings of each control: the AliasName of the Database control is set to ZF_PDM and its DatabaseName to ZF_PDM1; the DatabaseName of the Query control is set to ZF_PDM1 and the SQL statement is stored in its SQL property; the DataSet of the DataSource control is set to Query; and the DataSource of the DBGrid control is set to DataSource. In this way the DBGrid can access and display the database. 5) To connect the ADO controls to the database, double-click the TADOConnection control and choose "Microsoft OLE DB Provider for SQL Server" under "OLE DB Provider". Then enter the address of the server to be connected under "Select or enter a server name", enter the user name and password, choose the database on the server, and test until the connection succeeds, as shown in Figure 4.
The Connection property of the ADOQuery control is set to the TADOConnection, the SQL statement to be executed is stored in its SQL property, and the DataSet of the DataSource control is set to ADOQuery; the DataSource of the DBChart control is then set so that the query results can be displayed as a chart. An equivalent code-level sketch of this wiring is given below.
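As a concrete illustration, the same ADO-over-ODBC wiring can be performed in code rather than in the Object Inspector. The following C++ Builder sketch is only illustrative: the DSN name Workflow and the "sa" account come from the text above, while the method name, the password placeholder and the table name in the SQL statement are assumptions, since the real schema is not given in the paper.

// ADOConnection1, ADOQuery1, DataSource1 and DBGrid1 are assumed to be
// components already placed on the form at design time.
void __fastcall TMainForm::ConnectWorkflowDb()
{
    // Connect through the ODBC system DSN created in the Control Panel.
    ADOConnection1->LoginPrompt = false;
    ADOConnection1->ConnectionString =
        "Provider=MSDASQL;Data Source=Workflow;User ID=sa;Password=<password>";
    ADOConnection1->Connected = true;

    // Bind the query to the connection and fetch the rows to be displayed.
    ADOQuery1->Connection = ADOConnection1;
    ADOQuery1->SQL->Text = "SELECT * FROM FlowInfo";   // placeholder table name
    ADOQuery1->Open();

    // DataSource1 feeds the DBGrid (and, analogously, a DBChart) on the form.
    DataSource1->DataSet = ADOQuery1;
    DBGrid1->DataSource  = DataSource1;
}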
Detailed Design and Implementation of User Login Interface
The user enters a "user name" and "password"; after logging in successfully, the user can use all functions of the system. The control flow of the login interface is shown in Figure 5. The system saves the user name and password in a text file in the program folder; when the program starts, it first reads the user name and password from this file and judges whether the values entered by the user are correct.
The main login interface of the system must realize the following function: 1) extract the user name and password from the specified file, then test and verify whether the information entered is correct.
Here we detail the code that verifies the user's input. There are two steps: extracting the stored user name and password, and verifying the information the user entered. a) Extracting the user name and password: the default user name is Administrator, so only the password is read from the file; the program code is shown in Table 1.
b) Verifying the information the user entered; the code is shown in Table 2, and a combined sketch of both steps is given below.
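Since Tables 1 and 2 appear in the paper only as raw listings, the sketch below reconstructs both steps in a single C++ Builder event handler. The error-message fragment is taken verbatim from the original listing; the form, button and file names, the fixed Administrator account and the use of a TStringList are assumptions made for illustration only.

// Login button handler: reads the stored password from a text file in the
// program folder, then checks the values typed into Edit1 (user name) and
// Edit2 (password).
void __fastcall TLoginForm::LoginButtonClick(TObject *Sender)
{
    // a) Extract the stored credentials; the user name defaults to Administrator.
    AnsiString storedUser = "Administrator";
    AnsiString storedPassword;
    TStringList *file = new TStringList();
    try {
        file->LoadFromFile(ExtractFilePath(Application->ExeName) + "password.txt");
        if (file->Count > 0)
            storedPassword = file->Strings[0];
    }
    __finally {
        delete file;
    }

    // b) Verify what the user entered; the message text matches the original listing.
    if (Edit1->Text != storedUser || Edit2->Text != storedPassword) {
        Application->MessageBoxA("user name and password are error",
                                 "input error", MB_OK);
        Edit1->SetFocus();
        Edit2->Clear();
        return;
    }

    // Credentials accepted: close the login dialog and enter the main system.
    ModalResult = mrOk;
}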
Achievement of Visualization Office Management Subsystem
The design and realization of the visualization office subsystem start from the user's perspective and then provide visual displays of information to the user from various angles. According to the system framework and the function design of visual office management, the subsystem covers process status, process statistics and process examination. The functions are realized as follows: statistics of the number of to-do and completed flows for each ministry-level department, each division-level department, each leader and each staff member, with detailed information listed in charts; examination of the scores and work contents of each department, each leader and each staff member; generation of flow diagrams for the various processes, displaying the real-time dynamic state of each process; and display of the process templates in use and their process instances. To realize these functions, the visual office management subsystem was designed in detail, with related content designed for each of its sub-modules; after consultation with the enterprise, the detailed frame diagram of the visualization office management subsystem was established, as shown in Figure 6.
All detailed process information is stored in the enterprise database, including the subordinate department, the initiating department, the approver and so on. The flow-state module mainly uses histograms to present statistics on the completion status of each department (total number of flows, pending flows and completed flows), so the user can see from the processes which departments are operating well. It then shows the detailed information of the processes examined and approved by each ministry-level department, division-level department and enterprise leader, so the user can see in detail which processes have been approved by a department or examined by the leaders. Clicking a process in the process list shows its details together with a real-time dynamic flow chart, from which the user can monitor the state of the process. From this part, the leaders can see the completed processes of the departments in the enterprise and the operation of the processes they are concerned with. For example, clicking "administrative affairs" shows its subordinate departments, and all processes that have been examined and approved are listed on the right-hand side.
Statistics of Process
The process statistics module mainly compiles statistics on the quantity of processes examined and approved by each department and by enterprise personnel (leaders and employees), including the total number of flows, the number of approved batches and the number of unapproved batches, so the user can see the tasks and their completion status for each department and for the enterprise as a whole. A time control restricts the time range, lists the process templates used within the given period, counts the number of times each was used, and displays the detailed information of the process instances within that period. This lets the user see which process templates are used frequently and which are not, which is convenient when modifying process templates; a query sketch for these statistics is given below.
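The statistics described above ultimately reduce to grouped counts over the flow records in the BPM database. The sketch below shows one way such counts could be fetched with an ADOQuery for display in the grid or chart; the table and column names (FlowInstance, DeptName, Status) are invented placeholders, since the actual schema is not given in the paper.

// Fetch per-department flow counts (total, pending, done) for the statistics page.
void __fastcall TMainForm::LoadProcessStatistics()
{
    ADOQuery1->Close();
    ADOQuery1->SQL->Text =
        "SELECT DeptName, "
        "       COUNT(*) AS TotalFlows, "
        "       SUM(CASE WHEN Status = 'pending' THEN 1 ELSE 0 END) AS PendingFlows, "
        "       SUM(CASE WHEN Status = 'done'    THEN 1 ELSE 0 END) AS DoneFlows "
        "FROM FlowInstance "
        "GROUP BY DeptName "
        "ORDER BY DeptName";
    ADOQuery1->Open();   // the DBGrid bound through DataSource1 refreshes automatically
    // A WHERE clause on the flow creation date could be added here to implement
    // the time-range restriction described in the text.
}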
For convenience of operation, we adopt a layer-by-layer display method. Clicking a ministry-level department displays the statistics of processes examined and approved by that department and its subordinate division-level departments. Clicking a division-level department displays the statistics of processes examined and approved by the leadership and staff of that department. For example, clicking the "information center" shows the statistics of processes examined and approved by the division-level leaders Wang ** and Liu ** as well as other employees such as Wan ** and Mu **, as shown in Figure 7.
This module compiles statistics on the work intensity of each department and each member of personnel in the enterprise and, based on the information in the database, calculates an assessment score for each department and each person. Clicking "process evaluation" enters the process assessment page. To realize the functions above, the visualization research management subsystem was designed in detail, with related content designed for each of its sub-modules in consultation with the enterprise, and the detailed frame diagram of the visualization research management subsystem was finally determined.
The product project module shows the hierarchical relationships in a tree structure, so that users can see the product structure in the database visually by expanding and collapsing the product tree, and can perform series statistics, completion statistics and stage-mark statistics. Series statistics count how many projects fall under each product series. Completion statistics compare the total number of projects, the number of finished projects and the number of unfinished projects. Stage-mark statistics count the number of projects in the various technical phases. Clicking a product on the left of the product tree displays the detailed information of the projects under it, such as name, model, stage, creation date, creator and so on. Clicking a specific project in the product tree shows its detailed information, such as stage, creation date, creator, purpose and so on.
Document of the Project
The project document module reflects the project-document-process relationship comprehensively. A project has many documents, and each document has its related process. Through this module, the number of documents in each project and the key documents can be displayed explicitly, and each specific document, its flow chart and its history record can be browsed. In the flow chart, process nodes are marked with four different colors denoting, respectively, completed nodes, ongoing nodes, the previous node and to-be-done nodes, so the situation of the process is easy to grasp from the diagram. Based on the BPM database, an integrated visualization display of information is realized, so that the leadership can obtain the information they are concerned with intuitively and be free from the dilemma of "lacking useful data while trapped in a sea of data".
Improving global estimates of syphilis in pregnancy by diagnostic test type: A systematic review and meta-analysis
Background "Probable active syphilis" is defined as seroreactivity in both non-treponemal and treponemal tests. A correction factor of 65%, namely the proportion of pregnant women reactive in one syphilis test type who were likely reactive in the second, was applied to the syphilis seropositivity data reported to WHO for global estimates of syphilis during pregnancy. Objectives To identify more accurate correction factors based on the test type reported. Search Strategy Medline search using: "Syphilis [Mesh] and Pregnancy [Mesh]," "Syphilis [Mesh] and Prenatal Diagnosis [Mesh]," and "Syphilis [Mesh] and Antenatal [Keyword]." Selection Criteria Eligible studies must have reported results for pregnant or puerperal women for both non-treponemal and treponemal serology. Data collection and analysis We manually calculated the crude percent estimates of subjects with both reactive treponemal and reactive non-treponemal tests among subjects with reactive treponemal tests and among subjects with reactive non-treponemal tests. We summarized the percent estimates using random effects models. Main results Countries reporting both reactive non-treponemal and reactive treponemal testing required no correction factor. Countries reporting non-treponemal testing alone or treponemal testing alone required correction factors of 52.2% and 53.6%, respectively. Countries not reporting test type required a correction factor of 68.6%. Conclusions Future estimates should adjust reported maternal syphilis seropositivity by test type to ensure accuracy.
Background
In 2008, WHO estimated that, worldwide, approximately 1.4 million pregnant women had "probable active syphilis" (PAS), that is, syphilis infections sufficiently active to result in mother-to-child transmission (MTCT) and with the potential for subsequent adverse pregnancy outcomes [1]. Syphilis in pregnancy can be devastating and is associated with poor fetal or infant outcomes in the majority of cases, with an estimated 52% of PAS cases resulting in an adverse perinatal outcome attributable to syphilis [2]. PAS (defined as seroreactivity in both non-treponemal and treponemal tests) is used as the reporting measure by WHO since surveillance data typically do not include clinical information.
Currently, no single test or combination of tests accurately predicts the extent to which maternal syphilis infection in pregnancy will affect the fetus. However, serologic tests can be suggestive; the combination of a reactive non-treponemal test (e.g. rapid plasma reagin [RPR], venereal disease research laboratory [VDRL]) and a reactive treponemal test (e.g. Treponema pallidum particle agglutination [TP-PA], T. pallidum hemagglutination assay), defined in the 2008 WHO estimates as PAS, is compelling evidence for an infection that may result in MTCT. Neither type of test is both sensitive and specific on its own. A reactive, but unconfirmed, non-treponemal test may represent a biological false-positive result, whereas a reactive treponemal test alone may represent an old or previously treated infection that poses little exposure risk for the fetus. Considered schematically (Table 1), individuals with a positive result in both test types are likely to have syphilis (Cell A). Those with a single positive result in either test type could have syphilis, but might have a false-positive result or a past, treated infection (Cells B and C). Those with negative results in both test types are unlikely to have syphilis (Cell D).
WHO estimated that untreated syphilis in pregnancy resulted in approximately 521 000 adverse perinatal outcomes globally in 2008, including an estimated 212 000 stillbirths, 92 000 neonatal deaths, 65 000 preterm or low birth weight infants, and 152 000 syphilis-infected newborns [1]. Health outcomes were modeled based on the published literature on the risk of MTCT of syphilis [2] and national data reported to WHO from 147 countries on antenatal clinic (ANC) attendance (at least one visit) and from 97 countries on maternal syphilis seropositivity among ANC attendees through the WHO/UNAIDS Global AIDS Response Progress Reporting System (GARPR, formerly known as HIV Universal Access Reporting: http://www.unaids.org/en/dataanalysis/knowyourresponse/globalaidsprogressreporting/). Maternal syphilis seropositivity data reported to WHO varied across countries, generally falling into four categories (Table 2). Category 1 included countries reporting the number of maternal syphilis cases reactive to both non-treponemal and treponemal syphilis tests (PAS); Category 2 included countries reporting cases reactive to non-treponemal syphilis tests only (i.e. no confirmatory treponemal testing reported); Category 3 included countries reporting cases reactive to treponemal tests only (i.e. no confirmatory non-treponemal testing reported); and Category 4 included countries for which the type of laboratory test used was not reported.
In the 2008 estimates on burden of syphilis in pregnancy, WHO applied a correction factor assuming that 65% of all reported seropositive cases among pregnant women, regardless of test type, had infections that could lead to MTCT (PAS). A correction factor was necessary since 97% (188 of 193) of countries reporting to WHO had not reported on the test type used (Category 4), and many may have included only one test type (treponemal or nontreponemal) in their case definition. The correction factor was based on data from three ANC studies in which both non-treponemal and treponemal test results were reported [3][4][5], allowing calculation of the proportion of seropositive women in either test type expected to be reactive for both non-treponemal and treponemal tests (i.e. A/(A + B + C), Table 1). This estimation is best suited for Category 4 countries. However, for countries in Categories 1-3, more precise correction factors can be calculated. In this analysis, we sought to identify more accurate correction factors for future estimates of global burden of syphilis MTCT and resultant adverse pregnancy outcomes when test type data are available. Correction factors calculated were the estimated proportion of pregnant or puerperal women with reactive nontreponemal tests that had reactive treponemal tests (correction factor for Category 2 countries), or the proportion of pregnant or puerperal women with reactive treponemal tests that had reactive non-treponemal tests (correction factor for Category 3 countries).
Materials and methods
For this meta-analysis, we reviewed the published literature to identify country-level studies reporting maternal syphilis seropositivity results for both treponemal and non-treponemal tests on all patients in order to estimate the likelihood that a single unconfirmed syphilis test would also be positive for the alternative test type, had it been conducted.
To identify studies, we conducted a systematic Medline search using the terms: "Syphilis [Mesh] and Pregnancy [Mesh]," "Syphilis [Mesh] and Prenatal Diagnosis [Mesh]," and "Syphilis [Mesh] and Antenatal [Keyword]."
Inclusion criteria
To be included, eligible studies must have tested pregnant or puerperal women for both nontreponemal and treponemal serology and reported at least one of the following: the proportion of pregnant or puerperal women with reactive non-treponemal tests that had reactive treponemal tests (correction factor for Category 2 countries) or the proportion of pregnant or puerperal women with reactive treponemal tests that had reactive nontreponemal tests (correction factor for Category 3 countries). Studies were included regardless of type of non-treponemal (e.g. RPR, VDRL) or treponemal (e.g. fluorescent treponemal antibody absorption, TP-PA) test used, publication language, country, or age of subjects.
We used these data to estimate maternal syphilis seropositivity for countries reporting data to WHO based on a single test type (Categories 2 and 3), or that did not report the test type used (Category 4; Table 2). For Category 1 countries, we assumed that reported data should be used without correction since these are the best possible estimates for PAS cases in pregnancy when only test type (no clinical or titer) data are available. For Category 2 countries, we used the published literature to calculate estimates and 95% confidence intervals (CIs) for the proportion of pregnant women with reactive non-treponemal tests that also had reactive treponemal tests (i.e. A/(A + B) from Table 1). For Category 3 countries, we used the published literature to calculate estimates and CIs for the proportion of pregnant women with reactive treponemal tests that also had reactive non-treponemal tests (i.e. A/(A + C) from Table 1). For Category 4 countries, we assumed an equal probability of having used only non-treponemal, only treponemal, or a combined test strategy. Thus, we used the average of the estimates for the three correction factors for Categories 1 -3 to estimate the number of PAS cases ((Category 1 correction factor + Category 2 correction factor + Category 3 correction factor)/3). The estimated proportions for each WHO reporting category represent the correction factors to be used for their respective categories.
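Written out in the notation of Table 1, the correction factors for the four reporting categories are simply the following; this LaTeX fragment is only a compact restatement of the formulas already given above, with B and C denoting the single-test-reactive cells as implied by those formulas.

\begin{align*}
  f_{1} &= 1 \\
  f_{2} &= \frac{A}{A+B} \quad \text{(reactive treponemal among reactive non-treponemal)} \\
  f_{3} &= \frac{A}{A+C} \quad \text{(reactive non-treponemal among reactive treponemal)} \\
  f_{4} &= \frac{f_{1}+f_{2}+f_{3}}{3}
\end{align*}

With the pooled estimates reported in the Results below (f2 = 0.522, f3 = 0.536), the Category 4 average works out to (1 + 0.522 + 0.536)/3 ≈ 0.686, consistent with the 68.6% correction factor reported for countries that do not specify test type.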
Statistical analysis
For each study identified from the literature review, based on the reported data, we manually retrieved or calculated the crude percent estimates of subjects with both reactive treponemal and reactive non-treponemal tests among subjects with reactive non-treponemal tests (Category 2) and among subjects with reactive treponemal tests (Category 3), together with the corresponding 95% CIs for the assessed outcomes. We summarized the percent estimates using random effects models, which take between-study heterogeneity into account in the calculations. This approach was chosen over a fixed effects model since the underlying syphilis prevalence and other factors were different in each population studied.
Results
The MEDLINE search identified 514 studies along with two of the three studies identified in the WHO 2008 estimates literature search that met our inclusion criteria [1]. Of the 516 studies screened for eligibility, 29 met the criteria and were included in the analyses [3,5-32] (Fig. 1). Studies could be included in more than one analysis depending on what type of results were reported: once for estimating the correction factor for Category 2, once for Category 3, and all were included in the Category 4 estimate. In total, 24 of the 29 studies reported A/(A + B) results (Table 1) [3,33,34], representing 1896 women used to estimate the correction factor for Category 2 countries; and 13 of the 29 reported A/(A + C) results [3,5,8,9,14,16,18,[26][27][28][29][30]33], representing 1132 women included for the estimate for Category 3 countries. The studies were conducted in various clinical settings (e.g. hospitals, ANC clinics, rural clinics, urban clinics) and represented 22 countries. The study estimates and CIs for Category 2 and 3 countries are shown in Fig. 2.
Following pooling of the results from individual studies and accounting for within-and between-study variation using the random effects model, the correction factor for Category 2 countries was estimated to be 52.2% (95% CI, 38.0-66.6), indicating that an estimated 52.2% of the syphilis cases in pregnancy reported to WHO by these countries were likely to have PAS (Table 2). Using the random effects model, the pooled correction factor for Category 3 countries was quite similar, calculated as 53.6% (95% CI, 36.9-70.2; Table 2). As previously discussed, Category 1 countries reported the best possible estimates as data were based on both treponemal and non-treponemal testing results, and thus the correction factor was set as 1.0. For Category 4 countries, we used the average of the correction factors calculated for the first three categories, and the correction factor was calculated as 68.6% (95% CI, 61.3-78.9; Table 2). Thus, an estimated 68.6% of cases in pregnancy reported by these countries were likely to have been PAS.
Discussion
This analysis was conducted to improve future estimates of the global burden of syphilis in pregnancy and the related adverse outcomes. The meta-analysis results indicate that, among countries reporting maternal syphilis infections using a single test result, regardless of test type, an estimated 53% of cases represent sufficiently active infections to result in transmission of syphilis from mother to fetus. For countries not reporting test type, approximately 69% of cases are estimated to have sufficiently active infections to result in MTCT. Had the correction factors calculated herein been used in the 2008 WHO estimates, there would have been an increase in syphilis cases in pregnancy (1 408 811 vs 1 473 152 infections, or a 4.6% increase), and a proportionately similar increase in associated outcomes. Nevertheless, despite the difference between using a uniform or a variable correction factor based on reported test type not being significant in 2008, testing practices within countries may evolve over time and, thus, this may not always be the case. Furthermore, efforts are being made by WHO and UNAIDS to improve maternal syphilis seropositivity test type reporting, which will allow for improvements in the accuracy of estimates. Accurate estimates are important to evaluate progress in global and regional congenital syphilis elimination initiatives as well as for strategic planning [33].
Serologic testing is inherently imprecise in identifying infectious syphilis. Positive predictive values of tests vary according to population prevalence, clinical stage of disease, prior history of disease and treatment, and quality of laboratory testing. In pregnancy, MTCT risk can be influenced by co-infection with malaria or HIV [34]. Health systems with accurate laboratory testing and strong antenatal programs are likely to better identify true syphilis cases earlier in the course of pregnancy, leading to disease prevention. In settings with a stronger public health infrastructure, unconfirmed reactive treponemal tests are likely to represent previously treated syphilis infections; while in settings with weak testing and treatment infrastructures, unconfirmed reactive treponemal tests are likely to represent untreated syphilis. A clinical history can help distinguish previously treated from newly infected cases; however, these data are not available in WHO (or most national) surveillance systems, and their inclusion in routine surveillance is impractical.
It must be noted, however, that this study is not without limitations. First, the studies included in the meta-analysis varied in their setting (urban vs rural), underlying syphilis and other disease prevalence, and available health care and laboratory infrastructure. Further, despite having estimated the results using a random effects model, the correction factors are unlikely to be generalizable to every individual locale, country, or region. In particular, the underlying prevalence of syphilis in pregnant women will greatly affect the correction factor in Category 3. Second, relatively few studies in the published literature reported syphilis seropositivity in pregnancy for both treponemal and non-treponemal tests. It is hoped that, over time, more study data will be available to further refine the correction factor estimates.
Third, although a structured search was performed, the possibility of unpublished studies showing different results leads to a likelihood of selection bias in the studies included in the meta-analysis.
Despite these limitations, our study describes how estimates of maternal syphilis can be improved by correcting for test type. While not perfect, the correction factors calculated herein represent a step toward improved accuracy in estimating the global burden of syphilis infections in pregnant women and resultant perinatal health outcomes. This updated methodology, along with improvements in global reporting of test types, development of more sensitive and specific syphilis tests, and improved access to syphilis diagnostics in resource-poor settings, is likely to improve the global estimates of syphilis in pregnancy and associated outcomes in the future. Although this study focuses on maternal syphilis, the methodology could be applied to other global disease estimates where biomarkers are used to measure burden of disease.
Fig. 1 Flow diagram of study selection.
Implementation of School-Based Management to Improve Education Quality at MAN 6 Pidie
This study aims to determine the implementation of school-based management to improve the quality of education at MAN 6 Pidie. It is a case study focused on one phenomenon that is selected and understood in depth. The data were collected using observation, interviews and documentary studies. The subjects were the principal, vice principals, teachers and the school committee. The data analysis was carried out through two activities, namely data presentation and data verification. The results of the data analysis show that the implementation of school-based management by the principal at MAN 6 Pidie is excellent and in accordance with the policies and planning. In addition, the school principal always responds to and looks for solutions to every problem in implementing school-based management (SBM).
INTRODUCTION
To date, a problem that has not yet been overcome in Indonesian education is its low quality. The reasons for this low quality are not obscure: it stems from the uneven distribution of government policies and efforts across all educational units, from the provincial and city/district level down to remote areas. Various efforts have been made to improve the quality of national education through training programs, improvement of teacher competence, procurement of books and learning tools, improvement of educational facilities and infrastructure, and improvement of the quality of school management. Since education must be improved continuously, an approach to improving its quality is needed. It is hoped that empowerment, maturity and independence, as well as the quality of the nation, can be improved through education, because education is an aspect of life that is functional for every human being and occupies a strategic position in educating the life of the nation [1]. This approach is known as the concept of school-based management (SBM). SBM basically gives schools freedom to carry out all activities related to the implementation of education in order to achieve educational goals effectively and efficiently. The application of school-based management not only changes the management system in schools but also influences policies and community orientation towards the implementation of education to improve the quality of education in schools.
SBM can be an alternative in improving the quality of education. SBM has been implemented in various countries. It basically consists of (1) decentralization principle, the delegation of authority to regions and schools to manage their education autonomously in the development of national education, (2) empowerment of educational resources including participation and empowerment of parents and the community in developing education, (3) the existence of a school committee board that monitors or organizes the provision of facilities and supervision in the management of education [2].
SBM is a policy set by the government in education that regulates and allows schools to make policies and manage their own households. SBM is a step that is considered the most effective and profitable to improve the quality of education in schools. Schools are given the freedom to make their own policies according to the needs and conditions of the school environment.
SBM aims to improve the quality of education, especially in the regions, because schools and the community do not need to wait for orders from the center but can develop an educational vision according to regional conditions and carry out the educational vision independently. The purpose of implementing SBM is to improve management efficiency as well as the quality and relevance of education in schools [3].
Management is a process that utilizes all existing resources effectively and efficiently to achieve a goal or target [4]. Management is also an art: the art of managing all existing resources, both human resources and other resources, according to their respective functions so that goals can be achieved effectively and efficiently [5]. According to D. Andriany [6], management is a process of achieving organizational goals effectively and efficiently through planning, organizing, leading, and monitoring organizational resources. Management is indispensable in modern social organizations characterized by scientific thinking and educational innovation. Overall, management is a process that regulates activities or behavior so that it has a good effect, or the art of directing others to achieve the main goals of an organization or business through planning, organizing, managing, and controlling resources effectively and efficiently.
SBM aims to improve the effectiveness and efficiency of education. Effectiveness relates to appropriate and effective educational processes and outcomes as planned. The effectiveness of a school is surely known after releasing the results. On the other hand, to achieve good results, efforts are made to apply indicators or characteristics of effective schools [8]. By implementing SBM, it is hoped that every school, according to their respective conditions, can apply appropriate learning methods and media. As such, learning can run smoothly and effectively and is on target to improve the quality of education. Efficiency relates to the capital or costs incurred within limits that are not too large or small, but can fulfill all processes and maximum educational outcomes.
According to O. E. Muslihah [9], in order to run optimally, a strategy is needed in implementing SBM, including: (a) Grouping: schools can be grouped based on their ability to manage themselves, making it easier to know which schools require more attention; (b) Phasing: the implementation of SBM is carried out through stages, from the short-term stage to the long-term stage; (c) Implementation: a trial implementation is run first, prior to the permanent implementation, which requires binding regulations. Overall, implementing SBM essentially means choosing the best alternative for schools in developing themselves. SBM must be carried out continuously in order to result in an improvement of education.
Various supports from the components of education are needed to improve the quality of education as desired. At least, schools must have the following characteristics, namely: (a) Strong School Leadership, principals as leaders must able to utilize and carry out their functions to encourage and make decisions in improving school quality, (b) Effective Management of Education Personnel, education personnel, especially teachers, need to be managed properly, starting from their needs when teaching, participating in trainings to increase their teaching ability, evaluating teacher performance, and giving rewards for their services,(c) Schools Authority, schools have their own authority to improve themselves so that they can develop their respective abilities. (d) Schools Openness, openness means transparency in its management process, such as in the process of making a decision, in the use of school finances, and in evaluating the implementation of activities, (e) Good Communication, schools must establish good communication with the school's internal parties and external parties of the school. This well-established communication is intended so that all school later activities can be carried out properly because of the involvement of all existing components.
METHODS
This study uses a descriptive method with a qualitative approach, namely describing and analyzing the application of SBM in improving the quality of education at MAN 6 Pidie. The research was conducted on June 13, 2021. Respondents were the principal, the vice principal for curriculum, teachers and administration staff. The criterion for respondents was that they be able to answer the researchers' questions related to SBM at MAN 6 Pidie. The principal was involved because he was able to provide answers about the procedures for implementing SBM at MAN 6 Pidie. The vice principal for curriculum helped complete the principal's answers regarding SBM at MAN 6 Pidie. The teachers provided answers related to the results after the implementation of SBM at MAN 6 Pidie. The staff helped find data related to SBM.
RESULTS AND DISCUSSION
In discussing the results of this study, efforts will be made to interpret the research findings obtained in the field. This is based on the view that the main purpose of qualitative research is to gain meaning from the reality that occurs. A systematic discussion of the results of this research is presented as follows. The work program of the madrasah principal in realizing various educational programs at MAN 6 Pidie was clearly seen in the realization of various activities such as: (a) curriculum and teaching, (b) education staff, (c) students (student management), (d) finance and financing, (e) facilities and infrastructure, (f) school relations with the community, and (g) special services. Not all madrasah principals understand the purpose of leadership, the qualities, and the functions that must be carried out by leaders, especially in SBM.
The person who holds the position of head of the madrasah is the education leader. S. Aminah [12] stated that the duties and responsibilities of the madrasah head can be classified into two areas: (a) duties in the administrative field, and (b) duties in the supervisory field. The administrative duties of the madrasah principal include managing teaching, staffing, students, school buildings and grounds, school finances, and school and community relations. The supervisory tasks include providing guidance, assistance, supervision and assessment on problems related to the technical implementation and development of teaching, in order to improve teaching programs and activities and to create good teaching and learning situations.
The way principal works and views his role is influenced by his personality, professional preparation and experience, and the decisions made by the school regarding the role of the madrasah principal in teaching. Education services in the office for school administrators can clarify expectations for the role of the madrasah principal.
According to S. Rizal [13], the head of the madrasah has 11 kinds of roles, namely executor, planner, expert, supervisor of the relationships between members, representative of the group, rewarder, referee, responsibility holder, creator and father figure.
Strategy for the Implementation of SBM at MAN 6 Pidie
The strategy for implementing SBM at MAN 6 Pidie includes the following aspects: (a) the socialization stage, (b) formulation of the school's vision, mission and goals, (c) identification of the school's real challenges, (d) situational goals/objectives, (e) the functions that need to be involved to achieve the target, (f) SWOT analysis, (g) alternative problem-solving steps, (h) preparation of quality improvement work plans and programs, (i) program implementation and evaluation, and (j) formulation of new quality objectives.
In addition, it is important to note that schools and communities need to be involved in the SBM process as early as possible. They should not just wait, but involve themselves in discussions about SBM and take the initiative in organizing related aspects. Changing from a centrally based management approach to SBM is not an easy matter; it is a process that takes place continuously and involves all elements responsible for the implementation of school education.
Obstacles Faced by Madrasah Principals in Implementing SBM at MAN 6 Pidie
The obstacles faced by madrasah principals in implementing SBM at MAN 6 Pidie can be identified through the indicators of (a) school independence, (b) participatory decision making, and (c) management transparency. Further, they are described as follows:
Obstacles of SBM in school independence
In order to show its independence according to the essence of school autonomy, schools try to meet their needs together with school committees without relying on government support. Schools raise funds to get their own funds (self-funding) so that the education process at schools can run smoothly. Furthermore, schools try to manage their own funds effectively and efficiently. There is a priority scale in carrying out school goals that have been determined. In carrying out various school activities to implement education and improve quality, schools try to carry out their own (self-employment) without asking for instructions.
Obstacles of SBM in Participatory Decision Making
The principal as the central figure in the school has a very important role that will determine the atmosphere in the school, and the regulations that will be applied through the right decision-making process. In making decisions, the principal must be wise before the decision is socialized to the school community, because what is conveyed by the head of the madrasah is always heard and will then be implemented by the school community.
The role of the madrasah principal is very large which will have a huge impact on life at school. The role of the head of the madrasa include as an administrator, educator, leader and motivator of his subordinates. From this context, the head of the madrasah has a very big influence in school life, because the head of the madrasah is considered a leader who provides a good example.
Overall, the leadership style of madrasah principals in decision-making socialization activities is very useful in providing thoughts on how to deal with various styles of decision-making. Good school management is capable of producing quality school decisions, both quantitatively and qualitatively [12]. There is no better school management, except which can achieve positive, rational, and objective changes for school organizations.
Therefore, the skills of the madrasah principal as a manager in decision-making activities are a demand for competencies that must be possessed and a demand for quality management to encourage the development of organizational and management programs. Thus, the skills needed by managers in decision-making activities are: (a) cognitive skills, (b) data collection and processing skills, (c) communication skills, (d) influencing skills, and (e) managerial skills. It is clear that madrasah principals develop school excellence starting from planning to evaluation, so that schools can realize school excellence. Next, they can adapt to the development of science, technology according to their needs of developing the quality of human resources.
Obstacles to SBM in Transparency Management
The transparency of madrasah principals in the implementation of SBM can be seen from their openness in formulating and deciding policies involving school elements. These transparent activities include: (a) identifying the real challenges faced by the school, (b) identifying the readiness level of functions and their factors in a SWOT analysis, (c) determining alternative problem-solving steps and preparing plans and work programs for quality improvement in the short term (one year ahead), (d) holding the plenary meeting of the school committee at the beginning of the new school year, attended by all parents of students, committee members, community leaders and relevant government officials, with the main agenda of ratifying the RAPBS, (e) conducting continuous coordination, (f) creating an inventory of activity types and their implementers, (g) placing personnel in accordance with the assigned type and workload of activities, (h) discussing the allocation of funds for each activity, (i) providing an information board regarding various school matters, and (j) accepting criticism and suggestions from the public on school performance.
CONCLUSION
It can be concluded that the madrasah principal's work program in educational activities at MAN 6 Pidie has functioned properly and correctly. However, in the areas of education staff management and financial management, the respective roles have not been carried out optimally.
The strategy for implementing school-based management at MAN 6 Pidie is carried out through: (a) the socialization stage, (b) the formulation of the school's vision, mission and goals, (c) the involvement of a number of educational resources in achieving the school program, (d) the execution of a SWOT analysis of education programs that have been implemented, (e) the preparation of plans and work programs for quality improvement, and (f) the evaluation of program implementation. Obstacles faced by madrasah principals in implementing SBM include school independence and budget management, which have not been implemented in a transparent and accountable manner.
Immune-enrichment of insulin in bio-fluids on gold-nanoparticle decorated target plate and in situ detection by MALDI MS
Background Detection of low-abundance biomarkers using mass spectrometry (MS) is often hampered by non-target molecules in biological fluids. In addition, current procedures for sample preparation increase sample consumption and limit analysis throughput. Here, a simple strategy is proposed to construct an antibody-modified target plate for high-sensitivity MS detection of target markers such as insulin, in biological fluids. Methods The target plate was first modified with gold nanoparticle, and then functionalized with corresponding antibody through chemical conjugation. Clinical specimens were incubated onto these antibody-functionalized target plates, and then subjected to matrix assisted laser desorption ionization mass spectrometry analysis. Results Insulin in samples was enriched specifically on this functional plate. The detection just required low-volume samples (lower than 5 µL) and simplified handling process (within 40 min). This method exhibited high sensitivity (limit of detection in standard samples, 0.8 nM) and good linear correlation of MS intensity with insulin concentration (R2 = 0.994). More importantly, insulin present in real biological fluids such as human serum and cell lysate could be detected directly by using this functional target plate without additional sample preparations. Conclusions Our method is easy to manipulate, cost-effective, and with a potential to be applied in the field of clinical biomarker detection.
Background
The level of insulin in the blood explicitly indicates the function of endocrine beta cells, and thus the metabolic status of carbohydrates and fat. Insulin is therefore recognized as a marker protein for the diagnosis of various types of diabetes and related diseases [1][2][3][4][5][6]. The detection of insulin in diabetes samples has been used in early diagnosis, monitoring of disease progression, prognosis and pathology research [2]. Progress in techniques for insulin analysis will undoubtedly improve the early diagnosis of related diseases and facilitate follow-up therapy [3,4].
Currently, insulin analysis is based on a number of methods, including chemiluminescence immunoassays [7], radioimmunoassays [8], immunoaffinity chromatography-LC/MS/MS methods [9], surface plasmon resonance immunosensors [10], capillary electrophoretic immunoassays [11], and immunoenzymometric assays [12]. However, most of these methods are time- and labor-consuming, offer low analysis throughput, and use hazardous reagents such as radioactive labels.
Matrix-assisted laser desorption ionization mass spectrometry (MALDI MS) is a powerful tool for the detection and analysis of biomarkers [13]. Owing to remarkable features such as high throughput, high sensitivity, label-free analysis and ease of operation, MALDI MS-based analysis has gained considerable interest and is now widely utilized in different fields of biomolecule analysis (including discovery, identification and monitoring), and even as a diagnostic platform [14][15][16][17].
Nevertheless, directly detecting low-level biomolecules such as insulin is hampered by the complex composition of real biological fluids. The concentrations of target peptides or proteins in blood generally range from nM to μM [18], which are much lower than those of many non-target molecules, making them very hard to detect or quantify. In addition, the performance of MS is usually hampered by the high salt content of biological fluids, an effect known as ionization suppression [19]. Therefore, several immune-MALDI MS methods have been developed to pre-concentrate target analytes and increase analysis efficiency. These methods generally apply immunoaffinity columns [20] or antibody-conjugated magnetic beads [21,22] to capture target biomolecules, followed by MALDI MS analysis. However, these strategies require additional steps of centrifugation, sample transfer, or column fractionation, which increase sample consumption and limit analysis throughput [13].
One solution is to specifically enrich target molecules on the MS target plate, avoiding additional handling processes such as column purification or sample transfer. A small number of studies have reported the use of this strategy to detect insulin [23][24][25]. In these reported methods, the antibody against the target analyte was immobilized on a 2-dimensional planar substrate by using complicated chemical reagents [23,24] or non-specific adsorption [25]. However, the surface area of a 2-dimensional planar substrate is restricted, limiting the number of immobilized antibodies and the access of target analytes [23][24][25]. Meanwhile, complex organic chemicals such as dextran [24] may also introduce more non-target signals into the MS analysis. Thus, the sensitivities of these strategies were generally in the micromolar range, hindering more widespread application to the analysis of real biological fluids [23,25]. Therefore, a new design of target plate substrate is still needed to directly analyze real biological fluids using MALDI MS with minimum sample preparation.
Here, to achieve facile and sensitive insulin detection, we designed a simple strategy to immobilize insulin antibody directly onto a gold-nanostructured MALDI plate by means of chemical conjugation. The 3-dimensional non-planar nano-surface increased the surface area and improved the efficiency of antibody binding, thus enhancing the sensitivity of the analysis. Meanwhile, the simple small-molecule chemicals used in this platform avoided introducing interference signals into the MALDI MS. Compared with most previous immune-MALDI MS methods [20][21][22], this platform could simultaneously enrich and detect target insulin in biological fluids without any off-plate purification by affinity beads or columns, requiring only a very small quantity of sample and a greatly simplified handling process.
Preparation of GNP
Gold nanoparticles (GNP) were prepared according to the previous literature [26]. All glassware used was cleaned in aqua regia solution (HCl:HNO3 = 3:1) and then thoroughly rinsed with distilled H2O. Briefly, 100 mL of 0.01% HAuCl4 was heated to boiling in a round-bottom flask equipped with a condenser. Then, 1.3 mL of aqueous sodium citrate solution (1%) was added under vigorous stirring. In about 25 s, the solution turned blue; over approximately 1 min, the blue color gradually changed to red-violet. After that, the solution was kept boiling for an additional 10 min and then cooled to room temperature. The prepared nanoparticles were stored at 4 °C.
Surface modification of ITO slides
Silylation of ITO slides
The ITO glasses were cut into 25 × 37.5 mm slides and cleaned by sequential sonication in acetone for 20 min, in isopropanol for 20 min, in soap water for 15 min, and twice in distilled water for 10 min. The cleaned ITO slides were immersed in 5 M NaOH for 8 h at room temperature, then flushed thoroughly with distilled water and dried under an N2 stream. Afterwards, they were immediately treated with a freshly prepared APTMS solution (3% in methanol) for 2 h. The resulting APTMS-ITO slides were sonically cleaned in methanol three times and dried under an N2 stream.
Deposition of GNP on APTMS-ITO
The APTMS-ITO slides were immersed in GNP solution for at least 8 h at 4 °C, then flushed with distilled water and dried under an N2 stream.
Derivatization of carboxyl group on GNP-ITO slide
The GNP-deposited ITO (GNP-ITO) slides were immersed in a solution of MUA in ethanol (10 mM) for 8 h, rinsed with ethanol, and then dried under an N2 stream.
Immobilization of protein on MUA-GNP-ITO
To covalently attach protein to these two kinds of substrate, an aqueous solution of EDC (75 mM) and NHS (15 mM) was first applied to the slides for 25 min at room temperature. The protein solutions (anti-insulin or BSA in pH 7.4 phosphate buffer) were then spotted directly onto the proper locations of the slides using a pipettor. After all samples were spotted, the slides were kept in a sealed humid bottle at room temperature for at least 2 h to complete the coupling reaction. Then, an aqueous solution of EOA (1 M, adjusted with 5 M HCl to pH 8.6) was applied to the slides for 1.5 h to block the unreacted carboxyl groups. Finally, the anti-insulin- or BSA-modified ITO slides were flushed thoroughly with water to remove unbound proteins and dried under an N2 stream.
Insulin immune-reaction and MALDI-TOF analysis
Human insulin (0-72 nM) was dissolved in pH 7.4 phosphate buffer (10 mM) or in PBS buffer containing albumin (35 mg/mL), transferrin (2 mg/mL) and IgG (6 mg/mL) to generate standard insulin solutions. Five microliters of insulin solution was dropped onto the anti-insulin-modified ITO slide and incubated on a shaking table at room temperature for 30 min. The ITO plate was then rinsed sequentially with Tween 20 solution (0.05% Tween 20 in water) and distilled water and dried under an N2 stream. After that, 1 μL of DHB (15 mg/mL, 50% ACN, 0.1% TFA) was applied as the matrix for MALDI-TOF mass analysis. The solvent was dried naturally at room temperature and the target plate was then subjected to MALDI MS analysis.
Mass spectra were acquired on a SHIMADZU AXIMA Resonance MALDI-IT-TOF in reflectron/positive ion mode. A laser power of 105 mV was selected as the standard desorption energy for MS analysis.
Strategy to anchor insulin antibody on MALDI MS target plate
An indium tin oxide coated glass slide (ITO slide) was selected as the MALDI MS target plate owing to its ease of chemical modification and excellent conductivity [26], the latter being important for efficient MALDI MS analysis [25]. An easy approach to construct a nano-surface was designed and used to immobilize the insulin antibody on the ITO slide by chemical conjugation.
The approach (Fig. 1) utilized gold nanoparticles (GNP) to modify the ITO surface. Three key steps were needed: derivatization of amino groups on the ITO slide with the silanization reagent APTMS [27]; deposition of GNP on the amino-terminated ITO slides [26]; and derivatization of carboxyl groups on the GNP-ITO followed by chemical coupling of the antibody. The GNP, with their negatively charged surface, could be adsorbed stably on the amino-terminated ITO surface through electrostatic interactions [26]. Compared with the 2-dimensional planar surface of the original ITO, the 3-dimensional nano-structure greatly increases the surface area of the target plate. Correspondingly, the number of antibodies immobilized on the plate surface increased, which was beneficial for capturing more insulin and increasing MS sensitivity [6]. A key factor affecting detection was the laser energy of the MALDI MS, because a weak laser power may not desorb the insulin immobilized by the antibody. As shown in Fig. 2, when the laser energy was lower than 65 mV, the signal was hardly detected. The MS signal intensity increased gradually as the laser energy rose from 65 to 105 mV, and remained stable after the laser energy exceeded 105 mV (Fig. 2). Because excessive laser energy may introduce additional noise, we chose 105 mV as the standard laser energy in our experiments to achieve the best signal intensity.
To prove the validity of this approach, either the insulin antibody or a control protein, BSA, was covalently anchored on the GNP-ITO slides. Subsequently, standard solutions of insulin were incubated on these two protein-modified plates, which were then washed to remove any unbound or weakly bound species. As shown in Fig. 3a, on the BSA-modified GNP-ITO (without antibody), no insulin signal could be detected. Meanwhile, using the antibody-modified GNP-ITO as the target plate, a clear insulin signal was identified (Fig. 3b), indicating that the insulin in the samples was captured on the antibody-functionalized plate and then detected by MALDI MS. Notably, no other noise signals were visible in this mass range, indicating that the designed target plate did not introduce interfering signals in this range.
To exclude the possibility of capturing non-target molecules on the antibody-modified GNP-ITO, a mixed solution of insulin (34 nM) and a non-target peptide (331 nM) was analyzed using different target plates. As shown in Fig. 4, no recognizable peak was observed on the BSA-modified GNP-ITO (Fig. 4a). The conventional MALDI MS plate showed abundant signals of both peptides (Fig. 4b); as no separation method was employed, the insulin signal was much lower than that of the C-peptide (Fig. 4b). In contrast, on the antibody-modified target plate, the signal of the C-peptide disappeared and only the clear peak of insulin was observed in the spectrum (Fig. 4c). This evidence revealed that the antibody had been immobilized substantially on the target plate and specifically captured the target insulin in samples, as expected.
Sensitivity of the MALDI MS based on the anti-insulin modified GNP-ITO
Standard insulin samples at different concentrations were tested on the antibody-modified GNP-ITO. Five microliters of insulin solution were incubated on a circular area (diameter 4 mm) of the target plate for 30 min, followed by washing with Tween 20 solution (0.05%) and water. After drying, DHB (15 mg/mL in 50% ACN, 0.1% TFA) was spotted on the circular area as the MS matrix and the target plate was subjected to MALDI MS analysis. As illustrated in Fig. 5, strong signals of insulin at m/z 5809 and 5791 were clearly detected. The insulin peaks could be resolved from the background (≥3 times the baseline intensity) even when the insulin level was as low as 0.8 nM. The limit of detection (LOD) of 0.8 nM obtained in standard solution is much lower than in previously reported similar works, in which the tested peptide levels were usually in the micromolar [25] or several-hundred-nanomolar range [24]. A good linear correlation (R2 = 0.994) of MS intensity with insulin level was obtained in the range of 0.8-48 nM (Fig. 6). We also tested the performance of our method in complex biofluids. Owing to the difficulty of obtaining insulin-free serum, we mimicked serum by generating a buffer solution containing serum-abundant proteins such as albumin (35 mg/mL), transferrin (2 mg/mL) and IgG (6 mg/mL), added at concentrations similar to those in real serum. As shown in Fig. 6, although the MS intensity of insulin in the artificial matrix was lower than in PBS buffer, a linear relation with R2 = 0.985 could still be observed in the range below 32 nM (Fig. 6). These results demonstrated the ability of our method to perform real bio-fluid analysis and showed that interference from the sample matrix was limited.
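To make the quantification step concrete, the following sketch shows how a linear calibration of MS intensity against insulin concentration can be fitted and inverted to estimate an unknown sample. The concentration/intensity pairs, the intensity of the unknown, and the helper name estimate_concentration are illustrative assumptions, not the study's raw data.

import numpy as np

# Assumed calibration standards (nM) and their MS peak intensities (arbitrary units)
conc = np.array([0.8, 1.6, 4.0, 8.0, 16.0, 32.0, 48.0])
intensity = np.array([35.0, 70.0, 180.0, 360.0, 700.0, 1400.0, 2100.0])

# Least-squares line and linearity check (analogous to the reported R^2)
slope, intercept = np.polyfit(conc, intensity, 1)
r2 = np.corrcoef(conc, intensity)[0, 1] ** 2

def estimate_concentration(signal):
    # Invert the calibration line: MS intensity -> insulin concentration in nM
    return (signal - intercept) / slope

print(f"R^2 = {r2:.3f}; unknown at intensity 500 ~ {estimate_concentration(500.0):.1f} nM")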
A series of parallel serum samples collected from 6 patients receiving insulin therapy were also tested to investigate the reproducibility. Good repeatability was obtained in within-day and between-day tests, as shown in Table 1. In short, sensitive detection of insulin was easily achieved on this antibody-functionalized target plate.
Analysis of biological fluids
It is difficult to directly detect low-level targets in complex samples using MALDI MS. The complicated composition and high levels of non-target molecules in biological fluids generally generate high background signals and interfere with the detection of target molecules. In addition, salts present in samples greatly reduce the performance of MS analysis. Figure 7a shows the MS spectrum obtained by applying raw human serum (from patients receiving insulin therapy) to the conventional MALDI plate; no identifiable signals were observed at m/z 5809 and 5791. Subsequently, the human sera were incubated and washed on the antibody-modified GNP-ITO and then subjected to MALDI MS analysis. As shown in Fig. 7b, the insulin signals at m/z 5809 and 5791 were discerned clearly from the background, representing a significant improvement over conventional MALDI MS methods. From the MS intensities, the concentrations of insulin in these serum samples were calculated as shown in Table 2. Compared with the insulin levels obtained by clinically standard chemiluminescence immunoassays (CIA), the immune-MS results did not show a significant difference (p = 0.3383, Table 2; Fig. 8). Meanwhile, a strong correlation between the CIA and immune-MALDI MS results was also observed (correlation coefficient >90%).
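The comparison with the clinical reference method can be reproduced in outline with the short script below; the paired CIA and immune-MS values are made-up placeholders used only to show the paired test and correlation mentioned in the text.

import numpy as np
from scipy import stats

cia = np.array([12.1, 18.4, 9.7, 25.3, 14.8, 20.2])      # hypothetical CIA results, nM
maldi = np.array([11.5, 19.0, 10.3, 24.1, 15.5, 21.0])   # hypothetical immune-MS results, nM

r, _ = stats.pearsonr(cia, maldi)        # agreement between the two methods
t, p = stats.ttest_rel(cia, maldi)       # paired test for a systematic difference

print(f"Pearson r = {r:.3f}, paired t-test p = {p:.4f}")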
In addition, we measured the lysate fluid of rat pancreas cells using the developed target plate. Although the amino acid sequences of rat insulin ([M + H]+ = 5805 Da) and human insulin ([M + H]+ = 5809 Da) differ by several amino acids, rat insulin could still be recognized by the anti-insulin antibody we applied. The MS spectrum (Fig. 9) revealed that the rat insulin in the lysate fluid could also be detected directly, with a calculated concentration of around 14 nM. These results further demonstrate the feasibility of this method for bio-fluid analysis.
Conclusion
Here, we proposed a strategy to construct an antibody-functionalized MALDI target plate for fast and high-throughput MS analysis of insulin. The nanostructure on the surface of the target plate improved the efficiency of antibody coupling and insulin detection. Biological fluids, such as serum or cell lysate, could be sensitively analyzed using this functional target plate without any pre-purification, which is impossible for conventional MALDI MS methods. Furthermore, our method is also useful for
The relationship of waist circumference and body mass index to grey matter volume in community dwelling adults with mild obesity
Summary Objective Previous work has shown that high body mass index (BMI) is associated with low grey matter volume. However, evidence on the relationship between waist circumference (WC) and brain volume is relatively scarce. Moreover, the influence of mild obesity (as indexed by WC and BMI) on brain volume remains unclear. This study explored the relationships between WC and BMI and grey matter volume in a large sample of Japanese adults. Methods The participants were 792 community-dwelling adults (523 men and 269 women). Brain magnetic resonance images were collected, and the correlations between WC or BMI and global grey matter volume were analysed. The relationships between WC or BMI and regional grey matter volume were also investigated using voxel-based morphometry. Results Global grey matter volume was not correlated with WC or BMI. Voxel-based morphometry analysis revealed significant negative correlations between both WC and BMI and regional grey matter volume. The areas correlated with each index were more widespread in men than in women. In women, the total area of the regions significantly correlated with WC was slightly greater than that of the regions significantly correlated with BMI. Conclusions Results show that both WC and BMI were inversely related to regional grey matter volume, even in Japanese adults with somewhat mild obesity. Especially in populations with less obesity, such as the female participants in the current study, WC may be more sensitive than BMI as a marker of grey matter volume differences associated with obesity.
Introduction
Obesity is a risk factor for neurodegenerative diseases, such as Alzheimer's disease (1)(2)(3), as well as for hypertension (4,5), coronary heart disease (6) and metabolic diseases (7,8). Body mass index (BMI), which has conventionally been used to measure excess body fat, is associated with cerebral atrophy of the temporal region, as evaluated by visual ratings of computed tomography scans (9). Several magnetic resonance imaging studies of healthy participants have revealed specific brain regions in which reduced volume is associated with BMI (10)(11)(12)(13). These studies indicate that alterations in specific brain structures associated with obesity may precede clinically significant neurological changes in the brain.
Although widely used, the appropriateness of BMI as a universal indicator of body fatness for all populations has been questioned (7,14). BMI does not accurately measure fat content, nor does it reflect the proportions of muscle and fat, or account for sex and racial differences in fat content and distribution of intraabdominal (visceral) and subcutaneous fat (7,15). The risk of high mortality with normal-weight central obesity may be overlooked when only the BMI is used (14). Among the US population, the mortality of metabolically unhealthy people with a normal BMI is higher than that of metabolically healthy people classified as obese by the BMI (7). In Asia in particular, a great concern is that World Health Organization (WHO)-defined BMI cut-offs (16) may underestimate the risk from obesity because Asians tend to have increased body fat at normal BMI values (7,17). Shiwaku et al. reported that Japanese with BMIs in the range of 23.0-24.9 are at increased risk for obesity-associated disorders, even though these values are classified as normal according to WHO criteria (17).
Recently, waist circumference (WC), which estimates abdominal fat more directly than does BMI, has been argued to be a better indicator than BMI because WC is more closely correlated with the secondary adverse effects of obesity (18)(19)(20). Several previous studies have investigated the relationship between WC and brain structure (10,21,22). Kurth et al. demonstrated negative correlations between WC and grey matter volume and between BMI and grey matter volume in several brain regions, including the hypothalamus; the prefrontal, anterior temporal and inferior parietal cortices; and the cerebellum, with women showing more widespread correlations for WC than for BMI (10). Participants in previous studies were Caucasian, although this was not explicitly stated in the study by Debette et al. (21). Only the study by Kurth et al. (10) showed the actual number of participants with obesity (10% [11/115] with a BMI ≥ 30). Indeed, the incidence of obesity, as defined by the WHO criteria of BMI of 30 or greater, is estimated to be 10-20% in Europe and the USA, in contrast to 2-3% in Japan (23). Therefore, evidence on the relationship between WC and brain structure in populations with less obesity, such as the Japanese, remains sparse.
This study investigated correlations between brain volumes and obesity in a large Japanese sample, using both WC and BMI. These correlations were calculated for global grey matter volume and for specific grey matter regions using voxel-based morphometry (VBM). Based on the studies mentioned above, we examined the hypothesis that obesity-related differences in grey matter volume would be seen among individuals of a population with less obesity and that WC may have advantages over BMI in this population. This study also focused on sex-associated differences because sex is a key demographic factor that influences eating behaviour and body-weight regulation (24).
Participants
The participants were volunteers who had undergone private health screening at the University of Tokyo Hospital between 2008 and 2009. Body weight, height and the score on the Mini-Mental State Examination (MMSE) were measured as part of the health screening visit. WC was measured at the umbilicus level, according to the Japanese definition (25). The BMI was calculated as weight divided by height squared (kg m−2). In addition, blood pressure and blood samples, including blood sugar as well as serum levels of lipids, insulin and adiponectin, were evaluated. Patients were defined as having metabolic syndrome when they met the criteria for Japanese metabolic syndrome (25): central obesity (a WC of 85 cm or more for men and 90 cm or more for women) and any two of the following three risk factors: serum triglycerides ≥150 mg dL−1, serum high-density lipoprotein cholesterol <40 mg dL−1 or both; systolic blood pressure ≥130 mmHg, diastolic blood pressure ≥85 mmHg or both; and fasting blood glucose levels ≥110 mg dL−1. The homeostasis model assessment of insulin resistance (HOMA-IR) index was given as fasting insulin (μIU mL−1) × fasting glucose (mmol L−1), and the level of insulin resistance was defined as HOMA-IR > 2.5 (26). Although the existence of metabolic syndrome and that of insulin resistance were not considered exclusion criteria here, in order to include individuals with overweight and obesity in the study, these criteria were used later to divide participants into subgroups to check the potential influence of these parameters on global grey matter volume.
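As a minimal sketch of the screening-derived indices described above, the snippet below computes BMI and HOMA-IR. Note that the conventional HOMA-IR formula divides the insulin-glucose product by 22.5, which is assumed here even though the text states only the product; the example values are hypothetical.

def bmi(weight_kg, height_m):
    # BMI = weight / height^2 (kg m-2)
    return weight_kg / height_m ** 2

def homa_ir(fasting_insulin_uIU_ml, fasting_glucose_mmol_l):
    # Conventional HOMA-IR; the divisor 22.5 is an assumption, see note above
    return fasting_insulin_uIU_ml * fasting_glucose_mmol_l / 22.5

def insulin_resistant(fasting_insulin_uIU_ml, fasting_glucose_mmol_l):
    # Insulin resistance was defined as HOMA-IR > 2.5
    return homa_ir(fasting_insulin_uIU_ml, fasting_glucose_mmol_l) > 2.5

print(f"BMI: {bmi(70.0, 1.72):.1f} kg/m^2")                   # ~23.7
print(f"Insulin resistant: {insulin_resistant(12.0, 5.5)}")    # HOMA-IR ~2.9 -> True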
Participants who had a history of neuropsychiatric disorder or central nervous system disease were excluded. Two trained neuroradiologists reviewed all scans (including T2-weighted and fluid-attenuated inversion recovery images), and participants who had old infarcts, haemorrhages or aneurysms were excluded. The inclusion criterion for Fazekas et al. visual scale score to assess white matter on magnetic resonance imaging (range, 0 to 3) (27) was restricted to between 0 (absence) and 2 (smooth 'halo'). The institutional ethics committee approved the study. This study complies with the principles of the Declaration of Helsinki. Written informed consent was obtained from each participant after providing a complete explanation of the study. Furthermore, to protect subject confidentiality, patient information was stripped from all data.
Image acquisition
Magnetic resonance imaging data were obtained on two 3T Signa HDx scanners (GE Medical Systems, Milwaukee, Wisconsin, USA) of the exact same model with an 8-channel brain phased-array coil. For the VBM analysis, T1-weighted images were acquired in 124 slices by using three-dimensional spoiled-gradient recalled acquisition in the steady state (repetition time, 6.4 ms; echo time, 2.0 ms; flip angle, 151; field of view, 250 mm; slice thickness, 1 mm with no gap; acquisition matrix, 256 × 256; number of excitations, 0.5). The voxel dimensions were 0.977 × 0.977 × 1.0 mm.
A 'nonlinear only' modulation was performed on all images during spatial normalization to express the values in the resultant images as volumes corrected for brain size. The resultant modulated images were smoothed by using a Gaussian kernel of 8 mm (full width at half maximum). In addition, SPM8 default modulation was performed to calculate the total intracranial volume (TIV) as the sum of grey matter, white matter and cerebrospinal fluid volumes. When analysing global grey matter volume, the grey matter fraction (GMF) was defined as the proportion of the TIV occupied by the grey matter volume, to normalize the head size of each subject.
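The head-size normalization described above amounts to a simple ratio; the sketch below shows the calculation, with segmented tissue volumes (in mL) chosen purely for illustration.

def grey_matter_fraction(gm_ml, wm_ml, csf_ml):
    # TIV is the sum of grey matter, white matter and CSF volumes;
    # GMF expresses grey matter as a proportion of TIV
    tiv = gm_ml + wm_ml + csf_ml
    return gm_ml / tiv

print(f"GMF = {grey_matter_fraction(640.0, 520.0, 340.0):.3f}")   # ~0.427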
Statistical analysis
Pearson product moment correlations between GMF and WC, BMI, age, MMSE, as well as TIV were calculated separately for each sex to investigate the relationships between global grey matter volume and the other variables. The significance level was set at P < 0.05.
Voxel-wise analyses were performed to investigate the correlation between WC or BMI and the regional grey matter volume. Multiple regression was performed in SPM8 separately for each sex. The WC or BMI was treated as a covariate-of-interest. As nuisance variables, individual values for age and MMSE were included in the analysis for each sex. Two linear contrasts (1, −1) were made for positive and negative correlations, respectively. The significance level was set at the family-wise error-corrected P value of less than 0.05.
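A rough equivalent of this voxel-wise model, outside SPM, is an ordinary least-squares fit at every voxel with WC (or BMI) as the covariate of interest and age and MMSE as nuisance regressors. The sketch below uses randomly generated data as a stand-in and does not reproduce SPM8's family-wise error correction.

import numpy as np

rng = np.random.default_rng(0)
n_subj, n_voxels = 100, 500
wc = rng.normal(85.0, 8.0, n_subj)                  # waist circumference, cm (simulated)
age = rng.normal(55.0, 10.0, n_subj)
mmse = rng.normal(29.0, 1.0, n_subj)
gm = rng.normal(0.5, 0.05, (n_subj, n_voxels))      # modulated grey matter values per voxel

X = np.column_stack([wc, age, mmse, np.ones(n_subj)])   # design matrix: covariate + nuisance + intercept
beta, *_ = np.linalg.lstsq(X, gm, rcond=None)            # one regression per voxel
wc_effect = beta[0]                                       # WC slope at each voxel

print(wc_effect.shape)   # (500,): a WC coefficient for every voxel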
Results
Characteristics of the study population
Data from 792 participants (523 men and 269 women) were included in the analyses (Table 1). There were no sex differences in age or MMSE. The WC and BMI were significantly different between men and women, with both being greater in men than in women (P < 0.0001 for both; Wilcoxon rank-sum test). The TIV was also larger in men than in women (P < 0.0001; Student's t-test). In contrast, the GMF was larger in women than in men (P < 0.0001; Wilcoxon rank-sum test). There was no sex difference in the prevalence of people who qualified as obese with a BMI ≥ 30. However, the prevalence of participants with each of the following conditions was significantly lower in women than in men: those who (i) qualified as overweight with a BMI between 25 and 30, (ii) met the criteria for metabolic syndrome, (iii) met the criteria for insulin resistance or (iv) had a treatment history of lifestyle-related diseases (all Ps < 0.01; χ2 test). Table 2 shows Pearson's correlation coefficients between variables for each sex. Neither WC nor BMI was significantly correlated with GMF. This was also the case when the partial correlation coefficients between WC or BMI and GMF adjusted for age, MMSE and TIV were analysed. Age-related increases in WC and BMI were seen only in female participants. The partial correlation coefficient adjusted for MMSE and TIV was also significant between WC and age (r = 0.28, P < 0.01) and between BMI and age (r = 0.16, P < 0.01) in female participants. GMF and TIV were negatively correlated with adiponectin in male participants only, although the correlation became non-significant when adjusted for age (r = −0.08, P = 0.09; r = −0.04, P = 0.34). The participants were further divided into subgroups with or without metabolic syndrome or insulin resistance to check whether GMF was related to WC or BMI when analysed separately for each subgroup. However, none of the correlations reached statistical significance.
Voxel-wise analysis with voxel-based morphometry
In male participants, widespread regions in which grey matter volume was negatively correlated with WC or BMI were observed (Table 3 and Figure 1, blue). For female participants, significant WC-related or BMI-related decreases in regional grey matter volume were also observed, although the areas were much smaller than in male participants (Table 4 and Figure 1, red). The sum of the significant cluster sizes (number of voxels) for WC was close to that for BMI, although the former was slightly larger than the latter in female participants (WC = 2,358 and BMI = 2,136) and smaller than the latter in male participants (WC = 47,383 and BMI = 53,544).
For both male and female participants, neither the correlation between regional grey matter volume and WC nor that between regional grey matter volume and BMI was positive.
Discussion
The relationships of grey matter volume to WC and BMI in a large number of Japanese adults were evaluated. As for global grey matter volume, neither the relationship between GMF and WC nor that between GMF and BMI was significant. However, both WC and BMI were negatively correlated with regional grey matter volume in several structures. These regions were more widespread in men than in women.
As hypothesized, the participants in this study were classified as less obese than in previous studies: compared with the mean BMI reported in previous studies (25.02 ± 4.13 (10), 28 ± 5 (21) and 27.4 ± 4.5 or 27.2 ± 4.4 (22)), the mean BMI in this study was relatively low (24.7 ± 3.1 or 22.0 ± 3.3 for men or women, respectively, shown in Table 1) and classified as normal by WHO criteria (16). The percentage of participants with obesity with BMI of at least 30 was 4% for men and 3% for women, which are lower than the general prevalence of obesity in Europe and the USA (10-20%) (23). This may be one of the reasons why an association between global grey matter volume and WC or BMI was not seen in the present study. The participants in the report by Taki et al. (12) were Japanese, and their BMI was as low as that in this study (23.41 ± 3.00 for men and 22.23 ± 2.97 for women): they reported negative correlations between BMI and global grey matter volume in male but not female subjects among 1,428 participants. Their larger number of participants may account for this difference. Another explanation could be the use of self-report data instead of measured data. As they have discussed, Taki et al. obtained data on height and weight by self-questionnaire, and their study population might contain more people with overweight or obesity because subjects with higher BMIs significantly underestimated their weights, compared with those with smaller BMIs (12). Regarding this reporting bias, data of this study obtained by actually measuring height and weight may have estimated more precisely the correlations between BMI and global grey matter volume. The results of this study suggest that mild obesity in individuals with slightly high values of WC and BMI may influence regional grey matter volumes, even when these influences were not reflected in global grey matter volume. The candidate mechanisms for obesity-related differences in brain volume are excess body fat-induced vascular abnormalities (28) and metabolic disorders (such as diabetes mellitus) (19,20), both of which cause brain ischaemia, which in turn lead to brain atrophy. Together with previous studies (10,12,22), the present study suggests that several brain regions are affected by obesity: the bilateral frontal cortex, temporal cortex, inferior parietal cortex and medial occipital cortex, as well as bilateral cerebellar and midbrain thalamic regions. In female participants, the regions were more restricted to the frontal and left thalamic regions. Importantly, the regions that were affected in both male and female participants may be involved in obesity. Several investigators have proposed that the frontal regions are key to the regulation of taste, reward and behavioural control (11,29). The thalamic region is one of the areas thought to be involved in motivational processes, along with the anterior cingulate cortex, caudate nucleus, putamen, hippocampus, hypothalamus, insula and medial prefrontal cortex (30).
Waist circumference and BMI were highly correlated, and the regional grey matter areas associated with each overlapped. However, in female participants, the total area of the regions correlated with WC was slightly greater than that of the regions correlated with BMI. Some studies have reported that WC is more sensitive than BMI as an indicator of differences in brain structure (10,31). As in this study, Kurth et al. (10) found that the area of grey matter reduction associated with increases in WC was greater than that associated with increases in BMI when female participants were examined separately. Consistent with previous work, the findings of this study suggest that WC is an effective index of risk for neurodegenerative disorders related to obesity, especially in populations with less obesity, such as the female participants of this study.
A sex-associated difference exists in body fat deposition (32,33). Indeed, sex-dependent influences of obesity on brain volume have been reported in several studies (10,12,34). The present study suggests that sex influences the pattern of brain structure associated with body fat and that men are more susceptible to obesity-related differences in brain volume. Both WC and BMI in the current study were larger in men than in women. Therefore, female participants with WC or BMI levels comparable with those in men may show patterns of regional grey matter volume difference similar to those in men; a future study in a more homogeneous group is needed to examine this possibility. In summary, the finding of low regional grey matter volume associated with high WC and BMI in community-dwelling adults suggests that even mild obesity can affect regional brain structures. Although future studies will be needed to confirm whether the relationship between mild obesity and brain volume is quantitative or qualitative (i.e. ethnicity-specific), this finding suggests that interventions should be offered to people with mild obesity because, like people with obesity, they too may be at risk for future declines in brain function.
TLR2 Signaling Pathway Combats Streptococcus uberis Infection by Inducing Mitochondrial Reactive Oxygen Species Production
Mastitis caused by Streptococcus uberis (S. uberis) is a common and difficult-to-cure clinical disease in dairy cows. In this study, the role of Toll-like receptors (TLRs) and TLR-mediated signaling pathways in mastitis caused by S. uberis was investigated using mouse models and mammary epithelial cells (MECs). We used S. uberis to infect mammary glands of wild type, TLR2−/− and TLR4−/− mice and quantified the adaptor molecules in TLR signaling pathways, proinflammatory cytokines, tissue damage, and bacterial count. When compared with TLR4 deficiency, TLR2 deficiency induced more severe pathological changes through myeloid differentiation primary response 88 (MyD88)-mediated signaling pathways during S. uberis infection. In MECs, TLR2 detected S. uberis infection and induced mitochondrial reactive oxygen species (mROS) to assist host in controlling the secretion of inflammatory factors and the elimination of intracellular S. uberis. Our results demonstrated that TLR2-mediated mROS has a significant effect on S. uberis-induced host defense responses in mammary glands as well as in MECs.
Introduction
Mastitis is an inflammation caused by intra-mammary infection, leading to losses in the dairy industry [1]. Streptococcus uberis (S. uberis) is an environmental pathogen emerging as the most important mastitis-causing agent in some regions [2]. To make matters worse, there is growing evidence that it can infect people and pose a potential threat to human health [3]. Previous studies in our laboratory have demonstrated that persistent inflammation, including swelling, secretory epithelial cell degeneration, and polymorphonuclear neutrophilic leukocyte (PMN) infiltration, occurs in mammary tissue following injection with S. uberis [4]. This inflammatory response caused by S. uberis is milder than that caused by E. coli [4]. Additionally, these pathological responses are connected with intracellular infection by S. uberis, as it escapes elimination by immune cells and induces persistent infection.
Activating pattern recognition receptors (PRRs) to produce innate inflammatory immune responses is important for controlling the intracellular infection induced by bacteria such as S. uberis [5,6]. The Toll-like receptor (TLR) family plays a critical role in these processes. Once TLRs are activated by microbes, the MyD88-dependent pathway triggers the production of inflammatory cytokines through nuclear factor (NF)-κB and mitogen-activated protein kinases. Inflammation can also be induced through the TRIF-dependent pathway via a TIR-domain-containing adapter; this pathway is associated with the induction of IFNs and the stimulation of T cell responses [7]. Previous research has found that PRRs are expressed not only by immune cells but also by conventional non-immune cells, such as endothelial and epithelial cells, which also contribute to immune regulation [8].
Strandberg et al. first demonstrated that TLRs and their downstream molecules are expressed on bovine mammary epithelial cells (MECs) [8]. Ibeagha-Awemu et al. further revealed that the expression of TLR4, MyD88, NF-κB, TIR domain-containing adapter molecule 2 (TICAM2), and IFN-regulatory factor 3 increased in bovine MECs challenged with lipopolysaccharides [9]. These studies indicate that MECs, given their huge number in the mammary gland, could have a pivotal role in TLR-mediated host defense. Our laboratory has done extensive research on the function of TLRs and MECs against S. uberis infection in in vivo and in vitro models [4,[10][11][12]. We found that TLRs, mainly TLR2 but not excluding TLR4, initiate a complex signaling network characterized by NF-κB and nuclear factor of activated T cells. In addition, this network activates the secretion of cytokines and chemokines, accompanied by its self-regulation pathways, in response to S. uberis challenge [4].
Reactive oxygen species (ROS) are a class of oxygen-containing free radicals, including hydrogen peroxide (H2O2), superoxide anion (O2−), and hydroxyl radical (OH−) [13]. They are produced intracellularly through multiple mechanisms depending on the type of cell and tissue. However, the major ROS sources in mammalian cells are NADPH oxidase-derived ROS and mitochondrial-derived ROS (mROS) [12]. In most tissues, mROS from the respiratory chain is essential [14]: in innate immunity, mitochondria primarily fight bacterial infections through mROS, as evidenced by the fact that mROS modulates multiple signaling pathways including NF-κB, C-Jun N-terminal kinase, and the caspase-1 inflammasomes [15]. Previous studies have shown that restricting pathogen-induced mROS impairs NF-κB activation, suggesting that mROS positively controls the NF-κB signaling pathway [16,17]. In addition, the production of mROS in immune cells such as macrophages involves recruiting tumor necrosis factor (TNF) receptor-associated factor (TRAF)6 to mitochondria; TRAF6 also acts as an adaptor of the TLR signaling pathway [18]. Recently, MECs, the main cells responsible for lactation in mammary tissue, have also been found to play a non-negligible role in the regulation of infection. Pathogens invading mammary tissue and epithelial cells can stimulate MECs to produce proinflammatory cytokines, anti-inflammatory factors, and chemokines such as TNF-α, interleukin (IL)-1β, IL-4, IL-6, IL-8, and IL-10 [19]. It is possible that MECs are involved in the generation of ROS during infection. However, few studies have investigated the interaction between TLRs and mROS against S. uberis infection in vivo and in vitro. Therefore, we examined whether TLR-induced mROS plays an important role against S. uberis infection in the host and in MECs.
Bacterial Strain, Cell Culture, and Treatment
S. uberis 0140J (American Type Culture Collection, Manassas, VA, USA) was inoculated into Todd-Hewitt broth (THB) supplemented with 2% fetal bovine serum (FBS; Gibco, New York, NY, USA) at 37 °C in an orbital shaker and grown to mid-log phase (OD600 0.4-0.6). MECs (American Type Culture Collection, Rockefeller, MD, USA) were incubated in Dulbecco's modified Eagle's medium (DMEM) with 10% FBS and plated at 80% confluence in 6-well cell culture clusters. After culture in serum-free DMEM for 4 h, the monolayer was treated with 40 nM NG25 (inhibitor of TGFβ-activated kinase 1, TAK1; Invitrogen, Carlsbad, CA, USA) for 24 h, with 4 µM MK2206 (inhibitor of NADPHase; SellecK Chemicals, Houston, TX, USA) for 24 h, or transfected with 50 nM siTLR2 and/or siTLR4 for 72 h. Transfection with 20 nM siECSIT was performed for 48 h using Lipofectamine 3000 reagent (Invitrogen). Transfection reagents and siRNA (siTLR2, siTLR4, siECSIT) were purchased from Guangzhou Ruibo Biotechnology Co., Ltd., Guangzhou, Guangdong, China. The siRNA sequences were designed as follows. siTLR2: GTCCAGCAGAATCAATACA; siTLR4: CAATCTGACGAACCTAGTA; siECSIT: GGTTCACCCGATTCAAGAA. Knockdown of TLR2, TLR4 (Figure S2) and ECSIT (Figure S4) was verified by western blotting before these cells were used in the cell models of mastitis. The treated cells were infected with S. uberis at a multiplicity of infection (MOI) of 10 for 2 or 3 h at 37 °C. The supernatant and cells were collected separately and stored at −80 °C until use.
Mice and Treatment
Specific pathogen-free (SPF) clean-grade mice, including wild-type C57BL/6 (WT-B6), wild-type C57BL/10 (WT-B10), TLR2−/− (C57BL/6), and TLR4−/− (C57BL/10) mice aged 6-8 weeks (20 in total; each group included two never-pregnant females and three males), were purchased from the Nanjing Biomedical Research Institute of Nanjing University (Nanjing, China) and bred under specific pathogen-free conditions in the Nanjing Agricultural University Laboratory Animal Center. Detailed descriptions of the sources of the TLR2−/− and TLR4−/− mice can be found at https://www.jax.org/strain/004650 and https://www.jax.org/strain/007227, respectively. The mice were housed in individual cages and provided water and food ad libitum. All experimental protocols were approved by the Regional Animal Ethics Committee and were in compliance with animal welfare act regulations as well as the guide for the care and use of laboratory animals. Our protocol number approved by the animal welfare committee is N1418044. After 1 week of adaptive feeding, female and male mice were kept in cages at a ratio of 2:1 so that they could mate and conceive.
At 72 h after parturition, female mice in all experimental groups were infused with 100 colony-forming units (CFU) of S. uberis in a volume of 50 µL into the left 4th (L4) and right 4th (R4) teats. The offspring were weaned 2 h prior to experimental infusion. Following administration of ether anesthesia, the L4 and R4 teats were moistened with 75% ethanol, a 33-gauge needle fitted to a 1 mL syringe was gently inserted into the mammary duct, and 50 µL of S. uberis suspension was slowly infused. At 24 h post S. uberis infusion (PI), all mice were euthanized, and the mammary glands were aseptically collected and stored at −80 °C until analyzed.
The mammary gland was fixed in 10% neutral buffered formalin. Sections of 5 µm thickness were stained with hematoxylin and eosin. Mammary gland tissues were weighed and homogenized with sterile phosphate buffered saline (PBS) (1:5, w/v) on ice. After centrifugation at 500× g at 4 °C for 40 min, the supernatant was centrifuged again. The second supernatant was collected and stored at −80 °C until assayed.
Histological Observation and Immunohistochemistry
The mammary tissue fixed in 10% neutral buffered formalin was trimmed and flushed in water for at least 4 h, and then dehydrated in alcohol solutions ranging from 75% to 100%, with 5% increases at 1 h intervals. After soaking in xylene, the tissues were embedded in wax for 3 h at 60 °C. Slices (5 µm thick) were cut and stained with hematoxylin and eosin. The histological changes, including PMN infiltration, bleeding and degeneration, and adipose tissue loss, were analyzed by light microscopy (BH2; Olympus, Tokyo, Japan) at a magnification of 40×. Specifically, leukocyte infiltration, mainly lymphocytes and PMN, was categorized in four tissue areas: (1) teat cistern lining; (2) gland cistern lining; (3) gland cistern parenchyma; and (4) deep parenchyma. The prevalence of these cells was estimated for each tissue section at 250× and assigned a score of 1, 2, or 3, where 1: none to few leukocytes present; 2: moderate leukocyte infiltration; and 3: marked leukocyte infiltration. Results are presented as the average leukocyte infiltration score for each section of tissue characterized. Bleeding and degeneration in tissue samples were characterized using a score where 1: none to little bleeding and degeneration; 2: moderate bleeding and degeneration; and 3: marked bleeding and degeneration. Results are expressed as the mean bleeding and degeneration score of each tissue section characterized.
The area occupied by adipose tissue in secretory parenchyma tissue samples was estimated using a score where 1: less than 20% adipose tissue; 2: 20% to 50% adipose tissue; and 3: more than 50% adipose tissue. Alveolar lumen characterization and adipose tissue estimation are expressed as frequency percentages of each assigned score. The histological changes, including PMN infiltration, bleeding and degeneration, and adipose tissue loss, were observed by light microscopy (BH2; Olympus, Tokyo, Japan) at a magnification of 40× and then analyzed by a double-blind method [20].
Immunohistochemical staining was performed as follows. Tissue sections were washed with PBS, then covered with 3% H2O2 for 15 min at 37 °C to inhibit endogenous peroxidase activity. Tissue slices were blocked with 5% bovine serum albumin and incubated overnight with antibodies against MyD88, TRAF6, ECSIT, and TRIF (Cell Signaling Technology, Danvers, MA, USA) at 4 °C in a humidified chamber. Biotinylated anti-rabbit IgG (Boster Bio-Technology, Wuhan, China) was then applied for 30 min at 37 °C. After rehydration, the sections were incubated with avidin-biotin peroxidase complex for 40 min at 37 °C. Finally, the sections were washed and the bound conjugates were revealed by diaminobenzidine staining (Boster Bio-Technology). The sections were imaged with a microscope and characterized quantitatively using the Image-Pro Plus 5.0 image-analysis software (Media Cybernetics, Silver Spring, MD, USA). Integrated optical density (IOD) indicates the total amount of staining material in each section.
RNA Extraction and Quantitative Real-Time Polymerase Chain Reaction (qPCR)
qPCR was carried out as previously described [10]. Total RNA was extracted with TRIzol reagent (TaKaRa, Dalian, China). The corresponding cDNA was obtained using reverse transcriptase and an Oligo(dT)18 primer (TaKaRa). An aliquot of the cDNA was mixed with 25 µL SYBR® Green PCR Master Mix (TaKaRa) and 10 pmol of each specific forward and reverse primer. All reaction mixtures were analyzed in an ABI Prism 7300 Sequence Detection System (Applied Biosystems, Waltham, MA, USA). Fold changes were calculated as 2−ΔΔCt. All primer sequences (Table S1) were synthesized by Invitrogen Biological Company (Shanghai, China).
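The relative-quantification step stated above (fold change = 2^−ΔΔCt) can be written out as below; the Ct values are hypothetical and the reference gene is whichever housekeeping gene the assay used.

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    # dCt normalizes the target to the reference gene within each sample;
    # ddCt compares treated vs. control, and fold change is 2^-ddCt
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

print(f"{fold_change(24.0, 18.0, 27.0, 18.5):.1f}-fold")   # ~5.7-fold up-regulation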
Total Protein Extraction and Western Blotting
Intracellular protein levels were determined by western blotting. GAPDH (Bioworld, USA) was employed to ensure equal loading. Cells were washed twice in ice-cold PBS and lysed with RIPA buffer (Beyotime, Nantong, China) supplemented with the protease inhibitor PMSF (Beyotime, Nantong, China) by incubating on ice for 30 min in an Eppendorf tube. The supernatants were collected by centrifuging at 5000× g for 10 min at 4 °C, and protein concentration was determined by bicinchoninic acid (BCA) assay (Beyotime, Nantong, China). Then, a 10% gel was used, and 10 µL of protein sample was added per well. Extracts with equal amounts of protein were solubilized in SDS sample buffer (Bio-Rad, California, USA), separated by SDS-PAGE, and transferred to polyvinylidene difluoride membranes (Millipore, Bedford, MA, USA). The membranes were blocked with 5% non-fat milk diluted in Tris-buffered saline with Tween-20 (TBST) for 2 h at room temperature and hybridized overnight with primary antibody at 4 °C. Primary antibodies were diluted as follows: GAPDH (1:10,000), MyD88 (1:1000), TRAF6 (1:1000), ECSIT (1:1000), TRIF (1:1000). Before and after incubation with the secondary antibody at room temperature for 2 h, the membranes were washed 3 times with TBST. The secondary antibody was horseradish peroxidase (HRP)-linked anti-rabbit IgG (CST, Massachusetts, USA, 1:10,000). The signals were detected with an ECL western blot analysis system (Tanon, Shanghai, China). Bands were quantified with ImageJ software (NIH, Bethesda, MD, USA).
Measurement of Reactive Oxygen Species (ROS) and Mitochondrial Reactive Oxygen Species (mROS)
Intracellular ROS was evaluated by staining MECs with dichloro-dihydrofluorescein diacetate (DCFH-DA) (Beyotime, Nantong, China), a fluorescent ROS-sensitive indicator that freely permeates cell membranes. mROS was assessed with MitoSOX (Thermo, Waltham, MA, USA), a fluorescent mROS-sensitive indicator. Briefly, after incubation with 10 µM DCFH-DA for 30 min at 37 °C or with 5 µM MitoSOX for 20 min, cells were washed 3 times in phosphate buffered saline (PBS) and detached. The cells were centrifuged at 400× g for 5 min, resuspended in PBS, and immediately analyzed by flow cytometry using a FACSCanto (BD, Franklin, NJ, USA). Ten thousand cells per sample were analyzed using CellQuest Pro acquisition and FlowJo software [21].
Assay of TNF-α, IL-1β, and IL-6 by ELISA
The levels of TNF-α, IL-1β, and IL-6 in mammary glands and MECs were measured by ELISA (Rigor Bioscience, Beijing, China). Prepared standards (50 µL) and antibodies (40 µL) labeled with enzyme (10 µL) were reacted for 60 min at 37 °C, and the plate was washed five times. Chromogen solutions A (50 µL) and B (50 µL) were added and incubated for 10 min at 37 °C. Stop solution (50 µL) was added and the optical density was measured at 450 nm within 10 min. Qualitative differences or similarities between the control and experimental groups were consistent throughout the study. The activities or levels of NAGase, total antioxidant capacity (T-AOC), superoxide dismutase (SOD), malondialdehyde (MDA), and uncoupling protein 2 (UCP2) were determined using commercial kits purchased from the Nanjing Jiancheng Bioengineering Institute (China).
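Converting the measured OD450 values back to concentrations relies on a standard curve; the sketch below assumes a simple linear fit (a four-parameter logistic fit is also common) and uses made-up standards.

import numpy as np

std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0, 400.0])   # pg/mL, assumed standards
std_od = np.array([0.05, 0.12, 0.20, 0.38, 0.72, 1.40])       # corresponding OD450 readings

slope, intercept = np.polyfit(std_conc, std_od, 1)

def od_to_concentration(od450):
    # Invert the standard curve: OD450 -> concentration in pg/mL
    return (od450 - intercept) / slope

print(f"{od_to_concentration(0.55):.1f} pg/mL")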
Viable Bacterial Count Assay
Viable bacteria were enumerated as colony-forming units (CFU) on THB agar. The mammary glands were aseptically homogenized with sterile PBS (1:5, w/v). The supernatants were spread on plates. CFUs were counted by the spread plate method after incubation for 12 h at 37 °C.
MECs and MECs treated with siECSIT were incubated in DMEM with 10% FBS and plated at 80% confluence in 6-well plates. After culture in serum-free DMEM for 4 h, cells were infected with S. uberis grown to mid-exponential phase (OD600 0.4-0.6); the infected cells were then washed 3 times with PBS containing 100 mg/mL gentamicin, followed by gentamicin-free PBS. Cells were pelleted at 1.4× g for 10 min. The same number of cells were lysed with sterile triple-distilled water, and CFUs were counted by the spread plate method after incubation for 12 h at 37 °C [22].
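The plate counts translate into bacterial loads through the dilution and the plated volume; the helper below shows that back-calculation with illustrative numbers (the actual dilutions plated were not specified beyond the 1:5 homogenization).

def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    # CFU/mL = colonies x dilution factor / volume spread on the plate
    return colonies * dilution_factor / plated_volume_ml

# e.g. 85 colonies from a 1,000-fold dilution, 0.1 mL spread per plate
print(f"{cfu_per_ml(85, 1000, 0.1):,.0f} CFU/mL")   # 850,000 CFU/mL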
Statistical Analysis
Results were analyzed using GraphPad Prism 5.0 software (GraphPad Software Inc., La Jolla, CA, USA). Data are expressed as means ± standard error of the mean (SEM). Differences were evaluated by one-way analysis of variance followed by post-hoc tests. Differences were considered significant at p < 0.05.
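The stated workflow (one-way ANOVA followed by post-hoc comparisons) can be sketched as follows; Tukey's HSD is used here as one common post-hoc choice, and the group values are random placeholders rather than study data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
wt = rng.normal(100.0, 10.0, 6)        # e.g. cytokine level, infected WT mice (simulated)
tlr2_ko = rng.normal(80.0, 10.0, 6)    # infected TLR2-/- mice (simulated)
tlr4_ko = rng.normal(95.0, 10.0, 6)    # infected TLR4-/- mice (simulated)

f, p = stats.f_oneway(wt, tlr2_ko, tlr4_ko)       # one-way ANOVA across the three groups
posthoc = stats.tukey_hsd(wt, tlr2_ko, tlr4_ko)   # pairwise post-hoc comparisons

print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
print(posthoc)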
TLR2 Mediates Tissue Damage and Anti-S. uberis Infection in Mammary Glands
S. uberis is a gram-positive bacterium that is mainly recognized by TLR2. However, previous research has demonstrated that the role of TLR4 cannot be ignored in S. uberis infection owing to the close relationship and the similar structure and function of TLR2 and TLR4 [3,23,24]. In this work, we explored the roles of TLR2 and TLR4 in S. uberis infection in TLR2−/− and TLR4−/− mice to further understand the molecular defense mechanism in S. uberis mastitis. No histological changes were observed in the WT-B6 or WT-B10 mammary glands of control mice, whereas some suspected tissue damage was seen in TLR2−/− and TLR4−/− control mice (Figure 1A). Inflammation and tissue damage appeared in mammary tissue after infection with S. uberis in all challenged groups. These responses were characterized by PMN infiltration, increased bleeding and epithelial cell degeneration, and excess adipose tissue. Compared with WT-B6 mice, TLR2 deficiency induced more severe pathological damage: higher scores were recorded for the three indexes mentioned above, with significant increases in bleeding and degeneration as well as excess adipose tissue (p < 0.05; Figure 1B). However, TLR4 deficiency caused insignificant inflammation and tissue damage during S. uberis challenge compared with WT-B10 mice (Figure 1A,C). N-acetyl-β-d-glucosaminidase (NAGase), a marker enzyme of MEC and mammary gland damage, was significantly elevated in TLR2−/− mice, but not in TLR4−/− mice, when compared with WT mice at 24 h post-challenge (p < 0.05; Figure 1D and Figure S1). Similarly, the number of bacteria in the mammary tissue of TLR2−/− mice was higher than that in WT-B6 mice (p < 0.05), but there was no significant difference between TLR4−/− and WT-B10 mice. We conclude that TLR2 primarily mediated the tissue damage and anti-bacterial effect in mammary glands during S. uberis infection.
TLR2 and TLR4 Deficiencies Affect the Secretion of Cytokines in S. uberis Infection
Previously, it has been reported that the secretion of proinflammatory cytokines in mammary glands can more accurately reflect the level of inflammation [17,25]. Although inflammation contributes to fighting infection, massive release of cytokines can cause irreversible damage to tissues. Here, we investigated the levels of TNF-α, IL-1β and IL-6 in response to S. uberis infection in TLR2−/− and TLR4−/− mice (Figure 2A,B). S. uberis challenge significantly increased the TNF-α level in WT, TLR2−/− and TLR4−/− mice (p < 0.05). Compared with the corresponding control mice, TNF-α and IL-1β in TLR2−/− mice and TNF-α in TLR4−/− mice were significantly decreased (p < 0.05). These results indicated that TLR2 and TLR4 deficiencies affected the secretion of cytokines in S. uberis infection. (Figure 2 legend: The protein expression of TNF-α, IL-1β and IL-6 was determined by ELISA in the mammary glands of WT-B6, WT-B10, TLR2−/− and TLR4−/− mice with or without S. uberis challenge. Panel A shows the protein expression for the TLR2−/− group and panel B for the TLR4−/− group. Experiments were repeated three times and all data are presented as means ± SEM (n = 6). * (p < 0.05) = significantly different between the indicated groups.)
MyD88-Dependent Pathway Predominates in S. uberis Infection
After TLR activation, MyD88-dependent and -independent pathways are critical to host responses [26]. TLR2 and TLR4 can activate the MyD88-dependent signaling pathway to produce cytokines, while TLR3 and TLR4 can activate TRIF-dependent signaling, which activates NF-κB and IRF3, resulting in the induction of proinflammatory cytokine genes and type I IFNs [27]. We assessed the expression of MyD88 and TRIF by immunohistochemistry in mammary glands, as they are the key molecules of the MyD88-dependent and -independent signaling pathways, respectively. There was a significant increase in the expression of MyD88, but not TRIF, in WT, TLR2−/−, and TLR4−/− mice after challenge with S. uberis (p < 0.05). However, compared with WT mice, knockout of TLR2 or TLR4 reduced the expression level of MyD88 during S. uberis infection (p < 0.05; Figure 3A,B). As MECs are the main functional cells in mammary glands, and our previous studies have established that they play a key role in the anti-infection response of the mammary gland, we also examined MECs. We found that interference with TLR2 and/or TLR4 by specific siRNA significantly decreased MyD88 expression in S. uberis infection as well (p < 0.05; Figure 3C-E). These data suggested that, following TLR activation, the MyD88-dependent pathway predominated during S. uberis infection in MECs and in mammary glands. (Figure 3 legend: For quantitative analysis, bands were evaluated densitometrically with ImageJ software and normalized to GAPDH density. Experiments were repeated three times and data are presented as means ± SEM (n = 3). * (p < 0.05) = significantly different between the indicated groups.)
TRAF6 and ECSIT Participate in Signal Sensing from Toll-Like Receptors (TLRs) in S. uberis Infection
We next evaluated the expression levels of TRAF6 and the evolutionarily conserved signaling intermediate in Toll pathways (ECSIT), downstream targets of the MyD88 signaling pathway, using immunohistochemistry in the mammary glands of mice. TRAF6 increased dramatically in all mice after S. uberis infection (p < 0.05), although TLR2 or TLR4 deletion weakened the expression of ECSIT compared with WT mice during S. uberis challenge (p < 0.05; Figure 4A,B). Similarly, in MECs, interfering with TLR2 or TLR4 significantly reduced the expression of TRAF6 and ECSIT after S. uberis infection (p < 0.05; Figure 4C-E). Interestingly, there was no significant difference between TLR2−/− and TLR4−/− mice in the expression of TRAF6 and ECSIT during S. uberis infection. These results confirmed that TRAF6 and ECSIT, downstream of the MyD88 pathway, mediated the anti-S. uberis response in mice and in MECs.

Figure 4. For quantitative analysis, bands were evaluated densitometrically with ImageJ software and normalized to GAPDH density. Experiments were repeated three times, and data are presented as the means ± SEM (n = 3). * (p < 0.05) = significantly different between the indicated groups.
TLRs Mediate Redox Status in Mammary Glands During S. uberis Infection
TRAF6 activated by TLRs translocates from the cytoplasm to the mitochondria, where it engages ECSIT to produce mROS, inducing cellular antibacterial responses [28]. Because ROS levels in tissue are difficult to measure directly, we analyzed total antioxidant capacity (T-AOC), superoxide dismutase (SOD), malondialdehyde (MDA), and uncoupling protein 2 (UCP2) in mammary glands to indirectly reflect antioxidant status. The levels of MDA and UCP2 were significantly increased by S. uberis infection in WT, TLR2−/−, and TLR4−/− mice (p < 0.05; Figure 5A,B). Although MDA did not differ obviously between genotypes (p > 0.05), the absence of TLR2 further reduced UCP2 (p < 0.05). The level of T-AOC was significantly lower in all groups after S. uberis challenge compared with the corresponding controls (p < 0.05). Deletion of TLR4, rather than TLR2, significantly decreased SOD activity after S. uberis infection (p < 0.05). These results indicated that the host's oxidation status did change after S. uberis infection and that these changes were related to TLR signaling. We then aimed to clarify whether MECs were involved in the change in redox status and played a crucial role in S. uberis infection after activation of TLR signaling. Therefore, we interfered with the expression of TLR2 and/or TLR4 in MECs and measured ROS, mROS, and UCP2 levels. S. uberis infection caused a significant increase in the levels of ROS and mROS (Figure 5C,D), but interfering with TLR2 significantly reduced ROS and mROS levels (p < 0.05). siTLR4 also decreased their levels to some extent, but no significant difference was observed (p > 0.05). The expression of UCP2 decreased, but interfering with TLR2 reversed this change (p < 0.05). Taken together, these results demonstrated that infection with S. uberis changed the redox status in mammary glands as well as in MECs, and that TLR2 played an essential role in this process, especially in MECs.
mROS Plays an Important Role Against S. uberis Infection in Mammary Epithelial Cells (MECs)
GKT137831, a specific inhibitor of NADPH oxidase 1 (NOX1) and NOX4, and NG25, an inhibitor of TAK1 [29,30], were used to suppress ROS generation from NOX complexes and to down-regulate the production of proinflammatory cytokines, respectively. GKT137831 and NG25, applied simultaneously or separately, reduced the generation of ROS after challenge with S. uberis but did not change the level of mROS (p < 0.05; Figure 6A). The bacterial counts of S. uberis in MECs were significantly higher in the inhibitor-treated groups (p < 0.05) (Figure 6B). In addition, we measured the levels of cytokines and obtained similar results (Figure S3). We then inhibited the production of mROS with siECSIT to establish the role of mROS in regulating inflammation and anti-S. uberis activity. ROS and mROS levels decreased significantly after siECSIT treatment (p < 0.05; Figure 6C). Similar results were observed for TNF-α, IL-1β, and IL-6: their levels were up-regulated after S. uberis infection, and siECSIT reduced their expression (Figure 6D). Meanwhile, the bacterial counts of S. uberis in MECs were significantly higher in the siECSIT-treated group (Figure 6E). These results demonstrated that mROS plays an important role against S. uberis infection in MECs.
Discussion
The intrusion signal of intracellular bacteria (molecules broadly shared by pathogens that can be recognized by the immune system) captured by PRRs is crucial for the host to control inflammation and pathogen proliferation [31]. TLRs are among the most ancient, conserved components of the immune system, and our laboratory has established that they can sense and respond to S. uberis [4]. S. uberis is a Gram-positive bacterium, and TLR2 is the principal receptor that senses its invasion [32]. However, TLR2 and TLR4 share the same delivery system, and previous studies have not yet distinguished their exact roles in defending against S. uberis infection. We used TLR2−/− and TLR4−/− mice to thoroughly investigate, for the first time, the roles of these two closely related receptors in S. uberis infection. Deficiency of TLR2, rather than TLR4, induced a more severe inflammatory response and tissue damage in the mammary gland, and bacterial viability was correspondingly higher. These results confirmed that TLR2 detects S. uberis infection, initiates the antibacterial immune reaction, and controls the inflammatory status in mammary glands.
Proinflammatory cytokines, such as TNF-α, IL-1β, and IL-6, are secreted following activation of TLRs and their respective downstream signaling pathways, mainly in immune cells [33]. They are involved in the upregulation of inflammatory reactions and regulate the host defense response against pathogens to mediate innate immunity. In this study, TNF-α, the initial factor in the cytokine storm, increased dramatically after S. uberis challenge in all mouse strains. However, a similar change was seen only in TLR2−/− mice for IL-1β, and no significant change was observed for IL-6. These findings are consistent with previous reports that TNF-α, IL-1β, and IL-6 are expressed in a chronological order; the samples examined here were collected only at 24 h post-infection [34]. Compared with WT mice, the levels of TNF-α and IL-1β were markedly decreased in TLR2−/− mice during S. uberis infection. This further demonstrated the important role of TLR2 in the interaction between S. uberis and the host. A similar change was also seen in TNF-α levels in TLR4−/− mice, which may be due to the complexity of the inflammatory response network after infection. Both positive and negative inflammatory-factor feedback loops were present in S. uberis-infected mammary glands, and the secondary inflammation induced by the initial inflammatory factors may be caused in part by activation of TLR4. Therefore, the down-regulation of TNF-α caused by TLR4 deficiency differed from that caused by the loss of TLR2 and could not neutralize the inflammatory response induced by S. uberis.
Two distinct signaling pathways, the MyD88-dependent and TRIF-dependent pathways, are triggered by dimerized and activated TLRs [35]. Our in vivo experiments found that MyD88, but not TRIF, was significantly affected by S. uberis infection, confirming that the MyD88-dependent pathway predominates in this process. This phenomenon also exists in other bacterial infections; for example, Wiersinga et al. reported similar results for Burkholderia pseudomallei infection [36]. In the MyD88-dependent pathway, MyD88 recruits IL-1 receptor-associated kinases and then phosphorylates and activates TRAF6, which in turn polyubiquitinates TAK1 and induces the secretion of inflammatory cytokines during S. uberis infection in vivo and in vitro [4,37]. Recently, it has been suggested that activated TRAF6 translocates to the mitochondria, leading to ECSIT ubiquitination and increased mROS generation [38]. This signaling pathway plays an important role in innate immune responses against intracellular bacteria. A recent study also showed that macrophages depleted of ECSIT and TRAF6 have reduced TLR-induced ROS levels and a significantly impaired ability to kill intracellular bacteria [26]. Sonoda et al. similarly found that estrogen-related receptor α and PPARγ coactivator-1β (PGC-1β) act together as key effectors of IFN-γ-induced mitochondrial ROS production and host defense [39]. Our study emphasizes the importance of mROS in killing bacteria. Since detecting ROS in tissues is difficult, previous research has typically measured components of the antioxidant system in organs, such as T-AOC, SOD, MDA, and UCP2, to indirectly reflect ROS production [25,40]. Our results showed that S. uberis challenge caused significant changes in redox status in mammary glands, indicating that the TLRs/MyD88/TRAF6/ECSIT/mROS axis participates in the defense response to S. uberis infection.
The inflammatory phenomena in mammary glands involve the integrated responses of all kinds of mammary cells, including macrophages, PMNs, lymphocytes, MECs, and even matrix cells [41]. In the past decade, we have paid particular attention to the defensive ability of MECs because they are the most numerous cells in the udder, and we have detected TLR-mediated signaling pathways and the secretion of more than 40 cytokines in MECs [33]. In addition, we showed that S. uberis adheres to and is internalized by MECs, indicating that MECs are one of the main target cells of S. uberis (unpublished data). Intriguingly, MECs are not true immune cells and have distinctive responses to bacterial infections. For example, we have found that the PI3K/Akt/mTOR pathway in MECs contributes positively to inflammation following challenge with viable S. uberis, which is not consistent with the usual situation in some immune cells [11]. Thus, we treated MECs with specific siRNAs targeting TLR2 and/or TLR4 and then evaluated the effect of S. uberis challenge on the expression of key adaptor proteins. The results confirmed that the MyD88-dependent pathway predominated in S. uberis-infected MECs after TLR2 activation. A similar signal transfer process has been reported in macrophages infected with Mycobacterium tuberculosis, another Gram-positive bacterium [42]. In the present study, we were interested in whether the TLR2/MyD88/TRAF6/ECSIT axis regulated the production of ROS. Suppression of TLR2, but not TLR4, reduced the levels of ROS and mROS in MECs after S. uberis challenge. This result was further supported by the detection of UCP2, which separates oxidative phosphorylation from ATP synthesis and thus promotes the production of ROS and mROS [11]. Interestingly, UCP2 levels decreased in MECs but increased in whole mammary tissue during S. uberis infection. This may be because ROS levels rose with S. uberis infection and MECs, which are not professional immune cells, cannot cope with this rapid oxidative change, whereas at the tissue level UCP2 increased as a self-protection mechanism. The mammary gland contains not only MECs but also many endothelial and immune cells, and the net change in UCP2 reflects the combined contribution of these various cell types. Moreover, the mammary gland indeed needs stronger oxidizing power to resist infection by S. uberis. These observations also indicate that MECs have their own unique defense mechanisms, and more research is required to confirm this.
Initially, mROS was considered a by-product of biological oxidation whose production could not be regulated. Numerous studies have since established that oxidative phosphorylation in mitochondria is the main pathway for mROS production and the main source of cellular ROS [43]. Expressing catalase in mitochondria effectively reduces the production of mROS and thereby reduces the killing effect of macrophages on pathogens, indicating that mROS is a key driver of antibacterial activity [10]. Our previous study showed that TLR2 regulates the generation of ROS, including mROS, during S. uberis infection both in vivo and in vitro [4]. We therefore hypothesized that TLR2-mediated mROS is involved in the response to S. uberis infection. To test this hypothesis, GKT137831 and NG25 were used alone or together to suppress ROS from NOX complexes and the production of proinflammatory cytokines. We found that the antibacterial activity of MECs was restrained to some extent, establishing that ROS from NOX complexes and cytokines are involved in the host defensive reaction, consistent with our previous study [11]. Furthermore, we inhibited mROS synthesis with siECSIT and investigated the changes in inflammation and the effect of reduced mROS on bacterial viability. The bacterial counts of S. uberis were significantly higher in the siECSIT-treated group. These results demonstrated that TLR2-mediated mROS is a key factor against S. uberis infection in MECs. It is worth noting that, because knockout of TLR4 also reduced ECSIT expression, we cannot exclude the possibility that TLR4 also mediates mROS production.
In conclusion, mROS participates in the host response against S. uberis infection, and TLR2 is involved in sensing S. uberis invasion and controlling mROS production by regulating the expression of TRAF6 and ECSIT. Additionally, the function of mROS against S. uberis infection probably relies on its ability to regulate cytokine levels, thereby controlling the level of inflammation. However, we must point out that, due to the limitations of the current experimental methods, we cannot provide more in vivo data to further validate our conclusions. Notwithstanding this limitation, this study increases our understanding of the molecular defense mechanisms in S. uberis mastitis and provides theoretical support for the development of prophylactic strategies for this critical disease.
Supplementary Materials: The following are available online at http://www.mdpi.com/2073-4409/9/2/494/s1, Table S1: Oligonucleotide sequences used for RT-qPCR, Figure S1: TLR2/4 mediates the NAGase activity after challenge with S. uberis in the supernatant of MECs, Figure S2: TLR2/4 mediates the inflammatory response after challenge with S. uberis in MECs, Figure S3: The suppression of mROS reduces the inflammation factors after challenge with S. uberis in the supernatant of MECs, Figure S4: The protein expression of ECSIT was determined by Western blot after using siECSIT in MECs.
Effect of food insecurity on mental health of patients with tuberculosis in Southwest Ethiopia: a prospective cohort study
Objective The objective of this study is to investigate the effect of food insecurity on the mental health of patients with tuberculosis (TB) in Ethiopia.
Design A prospective cohort study.
Setting Health centres and hospitals located in Jimma zone, Southwest Ethiopia.
Participants Patients with TB who had recently been diagnosed with TB and started directly observed treatment in the selected 26 health institutions from October 2017 to October 2018. A total of 268 patients were followed for 6 months and data were collected at recruitment and two follow-up visits (at 2 and 6 months). Patients with multidrug-resistant TB were not included in the study.
Main outcome measures Mental distress was measured by the Self-Reporting Questionnaire-20 while food insecurity was assessed by using the Household Food Insecurity Access Scale.
Results A total of 268 patients were recruited and there was no loss to follow-up. The prevalence of food insecurity at baseline, first and second follow-up was 49.3%, 45.9% and 39.6%, respectively. Of these, 28.0% reported severe food insecurity at baseline, which declined to 23.5% at the end of the sixth month. Likewise, the prevalence of mental distress at baseline was 61.2% but declined to 22.0% at the second follow-up. At baseline, 77.3% of patients with mental distress reported severe food insecurity, but this declined to 46.0% at the second follow-up. In the final model, severe food insecurity (OR 4.7, 95% CI 2.4 to 9.4) and being a government employee (adjusted odds ratio (aOR) 0.3, 95% CI 0.1 to 0.9) were associated with mental distress.
Conclusion In this study, food insecurity was associated with mental distress over the course of follow-up. Likewise, there is a high prevalence of food insecurity and mental distress among patients with TB on treatment. Therefore, early assessment and interventions for food insecurity may improve the mental health of patients with TB on treatment.
-"Good surveillance system and presence of follow-up" does not make the data reliable. Suggest changing to emphasize systems in place that ensure high data quality. Methods -HIV status is not mentioned in the list of independent variables. As this is an important subgroup for TB, please specify whether HIV status was collected or not. If not, please describe the reason.
-As mentioned before, please specify in Data Analysis whether baseline and follow-up measurements were included for food insecurity and mental distress. If all time points were included, this should be specified.
-Page 9 line 25. Suggest referencing R in addition to R Studio, as R is the computational software, whereas R Studio is a user interface environment.
Results
-In Table 1, suggest adding the N for each subgroup in addition to the Total %. If there was loss to follow-up please put the N for each time point in the header.
-The high prevalence of severe food insecurity among day laborers is striking. I think this should be briefly mentioned in the Results (the magnitude of food insecurity) and Discussion narrative.
-The percentages reported in the magnitude of mental distress seem to be described incorrectly as column percent instead of row percent, which is what is presented in Table 2. For example, it is incorrect to say that "about 3/4th (77.3%) of patients with mental distress were suffering from severe food insecurity at baseline". Instead, Table 2 appears to show that 77.3% of patients with severe food insecurity at baseline had mental distress.
- Table 3 shows the intercept-only model with no data except BIC. I suggest removing this column from Table 3.
Discussion
-Page 20 lines 21-28. I think one would expect different prevalence of food insecurity in different settings. No need to explain why it's different between Ethiopia and Indonesia.
-Page 20 lines 33-37. Since the baseline prevalence falls in between the range, it seems strange to include a rationale for why the prevalence is higher in TB patients.
-Page 21 lines 55-56 and page 22 line 3. Suggest describing in greater detail the setting in which the cited studies were done. For example, two of the studies were done among TB patients (#35 and #37), and #37 did not find an association between food insecurity and mental distress. Please discuss possible reasons for the disparate finding.
-Strengths. As noted above, the longitudinal study design is only a strength if the analysis is done to take advantage of the temporality between exposure and outcome.
REVIEWER
Dorado, Julieta B, Department of Science and Technology-Food and Nutrition Research Institute
REVIEW RETURNED 24-Feb-2021
GENERAL COMMENTS
The reviewer provided a marked copy with additional comments. Please contact the publisher for full details.
VERSION 1 -AUTHOR RESPONSE
Reviewer: 1 Comments to the Author: This is a well done longitudinal study of a very important public health question regarding the association between food insecurity and mental distress among people diagnosed with TB. However, while the study has many strengths, there appears to be a major flaw that limits the strength of the evidence presented. The authors did not specify whether the analysis involved the effect of baseline food insecurity on subsequent mental disorders at follow-up. I assume then that the analysis included all time points for both food insecurity and mental distress. If true, the analysis approach negates the benefit of doing a longitudinal study as the temporality between the exposure and outcome is ambiguous. I suggest changing the analysis so that only baseline food insecurity and follow-up mental distress measurements are included in the models. Alternatively, the authors can use advanced methods to account for time-dependent exposures: marginal structural models or inverse probability weighting. Response: We included all time points for both food insecurity and mental distress because we were interested to look at the overall effect of food insecurity on mental distress over time controlling for potential confounders. Descriptively we reported food insecurity and mental distress at different time points. So, we are interested to keep the current analysis.
Additional considerations: Abstract -Suggest specifying that the 2 follow-up visits are at 2 and 6 months after treatment initiation. Response: We amended the manuscript based on the comment.
-Please indicate that MDR-TB patients are not included Response: We amended the manuscript based on the comment.
-In Results, indicate the sample size and loss to follow-up during 6 months Response: We amended the manuscript based on the comment.
-In line 36-37, specify whether you mean patients with baseline mental distress or mental distress at any time Response: In this line, we tried to describe mental distress at baseline and second follow. So, we amended the manuscript as follows: At baseline, 77.3% of patients with mental distress reported severe food insecurity but declined to 46.0% at second follow-up.
-In line 40 -42, specify the time points for food insecurity and mental distress. If all time points were included, this should be specified. Response: We amended the manuscript based on the comment. Strengths -As mentioned above, if all time points of food insecurity and mental distress were included in the models, the benefit of the longitudinal design is limited. Response: We appreciate the reviewer's comment. Still using this gives us more power even though all time points of food insecurity and mental distress are included.
-"Good surveillance system and presence of follow-up" does not make the data reliable. Suggest changing to emphasize systems in place that ensure high data quality. Response: We agree that ref#5 is not about poor mental health but about the link between food insecurity and tuberculosis as well as treatment outcomes. So, we restructured the sentence and references. Methods -HIV status is not mentioned in the list of independent variables. As this is an important subgroup for TB, please specify whether HIV status was collected or not. If not, please describe the reason. Response: We collected data regarding HIV status of the patients, and included this information in the manuscript both in the methods on page 9 and result sections on page 10 last paragraph.
-As mentioned before, please specify in Data Analysis whether baseline and follow-up measurements were included for food insecurity and mental distress. If all time points were included, this should be specified. Response: We included all time points in the analysis and included this information in the manuscript as follows: All time points for food insecurity and mental distress were included in the analysis.
-Page 9 line 25. Suggest referencing R in addition to R Studio, as R is the computational software, whereas R Studio is a user interface environment. Response: Thank you for the comment. We amended the manuscript based on the comment. Results -In Table 1, suggest adding the N for each subgroup in addition to the Total %. If there was loss to follow-up please put the N for each time point in the header. Response: We accepted the comment and added frequency and percentage to table 1.
-The high prevalence of severe food insecurity among day laborers is striking. I think this should be briefly mentioned in the Results (the magnitude of food insecurity) and Discussion narrative. Response: We accepted the comment and add severe food insecurity among daily laborers to the result and discussion.
-The percentages reported in the magnitude of mental distress seem to be described incorrectly as column percent instead of row percent, which is what is presented in Table 2. For example, it is incorrect to say that "about 3/4th (77.3%) of patients with mental distress were suffering from severe food insecurity at baseline". Instead, Table 2 appears to show that 77.3% of patients with severe food insecurity at baseline had mental distress. Response: We corrected the sentence as follows: About 3/4th (77.3%) of patients with severe food insecurity had mental distress at baseline, but it dropped to 64.6% and 46.0% in the second and sixth months, respectively.
- Table 3 shows the intercept-only model with no data except BIC. I suggest removing this column from Table 3.
-Page 20 lines 33-37. Since the baseline prevalence falls in between the range, it seems strange to include a rationale for why the prevalence is higher in TB patients. Response: We accepted the comment and deleted the explanation.
-Page 21 lines 55-56 and page 22 line 3. Suggest describing in greater detail the setting in which the cited studies were done. For example, two of the studies were done among TB patients (#35 and #37), and #37 did not find an association between food insecurity and mental distress. Please discuss possible reasons for the disparate finding. Response: In the references, we cited there is an association between food insecurity and mental distress or depression which is one component of mental distress. However, reference #37 is cited by error and replaced by another one.
-Strengths. As noted above, the longitudinal study design is only a strength if the analysis is done to take advantage of the temporality between exposure and outcome. Response: We are grateful for the reviewer's comment. Since the longitudinal study design in this study has more advantages, we considered it a strength.
Reviewer: 2 Comments to the Author: * The prevalence of TB was not indicated in the manuscript; it would have provided good context for pursuing this research topic. Response: We are grateful for the reviewer's comment. Since the study was entirely conducted among patients with TB, we did not mention the prevalence of TB. However, we added the prevalence of the different types of TB on page 10.
2. -the research question was not explicitly defined in the manuscript Response: We have defined our objective in the abstract on page 2 lines 5-8 as well as at the end of the introduction on page 5 lines 15-18.
3. -A levelling off of the definition of "longitudinal" study may be useful; as it is, the study's data collection lasted only 6 months... for research to be considered longitudinal would mean more than a year, with outcomes followed up over a much longer period. Response: We are grateful for the reviewer's comment. Since the TB treatment follow-up is only 6 months, we followed the patients from the start of treatment until treatment completion. So we prefer to use the word longitudinal because we followed patients at different time points, which is still acceptable. 10. -Table presentation
GENERAL COMMENTS
The authors did a great job of addressing most of the concerns and the revised manuscript is much improved. However, the use of the term "longitudinal" analysis is still inappropriate. While the study design was longitudinal (i.e. repeated measures during followup of individual participants), the analysis was done in a cross-sectional fashion (i.e. association between food insecurity and depression at all time points). Therefore, it is impossible to know whether the exposure (food insecurity) occurred prior to the outcome (depression). I therefore suggest the following changes to the manuscript to more accurately reflect the analytical design: -Throughout the abstract and main text, remove the term longitudinal and replace with describing that data were collected at 3 time points for each participant -Please change the term "predicted" to "was associated with"
VERSION 2 -AUTHOR RESPONSE
Reviewer: 1 Dr. Sanghyuk Shin, University of California Irvine Comments to the Author: The authors did a great job of addressing most of the concerns and the revised manuscript is much improved. However, the use of the term "longitudinal" analysis is still inappropriate. While the study design was longitudinal (i.e. repeated measures during follow-up of individual participants), the analysis was done in a cross-sectional fashion (i.e. association between food insecurity and depression at all time points). Therefore, it is impossible to know whether the exposure (food insecurity) occurred prior to the outcome (depression). I therefore suggest the following changes to the manuscript to more accurately reflect the analytical design: -Throughout the abstract and main text, remove the term longitudinal and replace with describing that data were collected at 3 time points for each participant -Please change the term "predicted" to "was associated with" Response: We are grateful for the reviewer's comment. Rather than removing the term "longitudinal", we redid the analysis including baseline food insecurity and follow-up mental distress, because we are interested in keeping the term "longitudinal". So, we amended the manuscript based on the new result.
REVIEWER
Shin, Sanghyuk, University of California Irvine
REVIEW RETURNED 09-Aug-2021
GENERAL COMMENTS
It's still unclear to me whether longitudinal analysis was done. For the analysis of follow-up data, it appears that the models include food insecurity at follow-up with mental distress at follow-up. This analysis is informative, but cannot distinguish which came first, the food insecurity or mental distress. As such, it does not take advantage of the longitudinal design of the study. A longitudinal analysis will involve models with baseline food insecurity and subsequent mental distress at follow-up. I don't think the current analysis addresses this issue of temporality. As with my prior comments, I suggest changing the analysis or changing the text to remove the terms "longitudinal" and "predict".
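To make the analytic distinction raised in this exchange concrete, the sketch below contrasts, in Python with statsmodels, a repeated-measures (GEE) model that uses food insecurity and mental distress from all visits with a model relating baseline food insecurity to mental distress at the final visit. The simulated data, variable names, and the exchangeable working correlation are illustrative assumptions, not the authors' actual analysis.

```python
# Hedged sketch: contrasting an "all time points" repeated-measures model with a
# baseline-exposure / follow-up-outcome model. Data are simulated, not the study data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
records = []
for pid in range(150):
    insecure = int(rng.integers(0, 2))          # hypothetical severe food insecurity at baseline
    for month in (0, 2, 6):
        p_distress = 0.25 + 0.35 * insecure - 0.02 * month
        records.append({"patient_id": pid, "visit_month": month,
                        "food_insecure": insecure,
                        "mental_distress": int(rng.random() < p_distress)})
df = pd.DataFrame(records)

# (a) All visits pooled: GEE handles the within-patient correlation, but exposure and
# outcome come from the same visit, so temporality stays ambiguous.
gee = smf.gee("mental_distress ~ food_insecure + visit_month",
              groups="patient_id", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.params)

# (b) Baseline exposure predicting distress at the 6-month visit: exposure precedes outcome.
merged = (df[df.visit_month == 0][["patient_id", "food_insecure"]]
          .merge(df[df.visit_month == 6][["patient_id", "mental_distress"]], on="patient_id"))
logit = smf.logit("mental_distress ~ food_insecure", data=merged).fit(disp=False)
print(np.exp(logit.params["food_insecure"]))    # odds ratio for baseline food insecurity
```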
Genome-wide analysis of the polyphenol oxidase gene family reveals that MaPPO1 and MaPPO6 are the main contributors to fruit browning in Musa acuminata
Introduction Polyphenol oxidases (PPOs), which are widely present in plants, play an important role in the growth, development, and stress responses. They can catalyze the oxidization of polyphenols and result in the browning of damaged or cut fruit, which seriously affects fruit quality and compromises the sale of fruit. In banana (Musa acuminata, AAA group), 10 PPO genes were determined based on the availability of a high-quality genome sequence, but the role of PPO genes in fruit browning remains unclear. Methods In this study, we analyzed the physicochemical properties, gene structure, conserved structural domains, and evolutionary relationship of the PPO gene family of banana. The expression patterns were analyzed based on omics data and verified by qRT-PCR analysis. Transient expression assay in tobacco leaves was used to identify the subcellular localization of selected MaPPOs, and we analyzed the polyphenol oxidase activity using recombinant MaPPOs and transient expression assay. Results and discussion We found that more than two-thirds of the MaPPO genes had one intron, and all contained three conserved structural domains of PPO, except MaPPO4. Phylogenetic tree analysis revealed that MaPPO genes were categorized into five groups. MaPPOs did not cluster with Rosaceae and Solanaceae, indicating distant affinities, and MaPPO6/7/8/9/10 clustered into an individual group. Transcriptome, proteome, and expression analyses showed that MaPPO1 exhibits preferential expression in fruit tissue and is highly expressed at respiratory climacteric during fruit ripening. Other examined MaPPO genes were detectable in at least five different tissues. In mature green fruit tissue, MaPPO1 and MaPPO6 were the most abundant. Furthermore, MaPPO1 and MaPPO7 localized in chloroplasts, and MaPPO6 was a chloroplast- and Endoplasmic Reticulum (ER)-localized protein, whereas MaPPO10 only localized in the ER. In addition, the enzyme activity in vivo and in vitro of the selected MaPPO protein showed that MaPPO1 had the highest PPO activity, followed by MaPPO6. These results imply that MaPPO1 and MaPPO6 are the main contributors to banana fruit browning and lay the foundation for the development of banana varieties with low fruit browning.
Introduction

Polyphenol oxidase (PPO) is a copper-binding enzyme that widely exists in animals, plants, fungi, and bacteria (Mayer, 2006). According to its specific substrate and mechanism of action, it can be divided into three categories: tyrosinases (EC 1.14.18.1), catechol oxidases (EC 1.10.3.1), and laccases (EC 1.10.3.2) (Mishra and Gautam, 2016). Many studies have reported that PPO is induced in response to biotic and abiotic stress in plants, and it has been implicated in several functional processes, including participation in plant defense and the synthesis of plant-specific metabolites (Constabel et al., 1995; Gandía-Herrero and García-Carmona, 2013; Araji et al., 2014; Sullivan, 2015).
The browning reaction of plants is considered to be related to PPO. The oxidation of phenolic substrates by polyphenol oxidase (EC 1.10.3.1) is thought to be the major cause of the brown discoloration of many fruit and vegetables during harvesting, storage, transportation, and processing (Vámos-Vigyázó and Haard, 1981; Queiroz et al., 2008; Moon et al., 2020). Catecholase is mainly distributed in plants and typically catalyzes the oxidation of o-diphenols to o-quinones in the presence of molecular oxygen. Quinones are highly reactive and spontaneously cross-link with amino acids, proteins, and other phenolic compounds to form brown polymers that appear in plant extracts and wounded tissues (Mayer, 2006; Sullivan, 2015), which also cause significant economic impacts, both to primary food producers and the food processing industry (Singh et al., 2018; Dias et al., 2020).
The distribution and function of PPO proteins differ among plants (Tran et al., 2012). Most PPO proteins are transported to the thylakoid lumen in the chloroplast (Koussevitzky et al., 2008), and they have also been found in vacuoles, the cytosol, and other organelles (Nakayama et al., 2001; Tran and Constabel, 2011), while phenolic compounds are generally confined to the vacuoles (Vaughn and Duke, 1984). Given the physical separation of PPO enzymes from their substrates, the PPO enzyme-substrate interaction requires the destruction of cell compartmentation by insects, mechanical damage, disease, or microorganism invasion (Boeckx et al., 2015; Maioli et al., 2020).
Banana (Musa acuminata, AAA group) is one of the world's most important fruit crops and is widely cultivated in tropical countries due to its high nutritional and economic value (Padam et al., 2014). However, enzymatic browning has a serious impact on the development of the banana industry, especially during harvesting and post-harvest handling, storage, and processing (Gooding et al., 2001). Approaches to the prevention of enzymatic browning are divided into physical and chemical methods. Physical methods to regulate enzymatic browning include thermal treatment, prevention of oxygen exposure, use of low temperature, and irradiation (Tinello and Lante, 2018). Chemical methods to inhibit PPO activity include acidification or reduction using antioxidants, chelating agents, or natural extracts (Moon et al., 2020). Most of these methods have negative consequences, so the most attractive way of preventing food browning is through natural methods (Hamdan et al., 2022). Accordingly, the development of new banana varieties with low fruit browning through molecular breeding approaches, such as genome editing, is the best way to tackle this problem.
In banana, Gooding et al. (2001) identified four MaPPO genes from the Cavendish subgroup and indicated that fruit browning during ripening is due to the release of the pre-existing enzyme, based on detection of the transcripts of MaPPO genes. The detailed sequence information of these four MaPPOs is not available, and the main contributors to fruit browning remain obscure. In this study, the PPO genes were analyzed genome-wide in Musa acuminata, and 10 putative PPO genes were identified. The expression patterns of MaPPOs were comparatively examined based on transcriptome, proteome, and real-time reverse transcription-PCR (qRT-PCR) data. Moreover, the subcellular localization of four MaPPOs was determined using transient expression in tobacco leaves. In addition, the PPO activity of the selected MaPPOs was analyzed in vivo and in vitro. The results shed light on the role of MaPPOs in fruit browning and provide a theoretical basis for creating new varieties with low fruit browning.
Materials and methods
Determination of PPO genes in Musa acuminata

BLAST and HMMER were used to identify PPO genes with conserved structures in banana. Musa acuminata amino acid sequences (MaPPOs) were extracted from the M. acuminata assembly (https://banana-genome-hub.southgreen.fr/). To examine the presence of the conserved domains, a batch search of the sequences of all obtained MaPPO genes was performed through the online databases of SMART (http://smart.embl.de/smart/set_mode.cgi?GENOMIC=1) and NCBI CDD (https://www.ncbi.nlm.nih.gov/cdd/). The MWs, PIs, and hydrophilia parameters were evaluated using an online tool on the ExPASy server (https://web.expasy.org/protparam/).
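As a small illustration of the physicochemical characterization described above, the Biopython sketch below computes the molecular weight, isoelectric point, and grand-average hydropathy for a protein sequence, which are the properties the ExPASy ProtParam tool reports. The placeholder sequence and the local ProtParam-style calculation are assumptions for illustration; the authors used the ExPASy web server, and real MaPPO sequences would be used in practice.

```python
# Hedged sketch: ProtParam-style properties for a deduced protein sequence.
# The sequence below is a short placeholder, not an actual MaPPO sequence.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

placeholder_seq = "MASLSNSLPSSVTTKPSRASLLRRNALVAVAGLSLLLNGAHA"
analysis = ProteinAnalysis(placeholder_seq)

print(f"length            : {len(placeholder_seq)} aa")
print(f"molecular weight  : {analysis.molecular_weight() / 1000:.2f} kDa")
print(f"isoelectric point : {analysis.isoelectric_point():.2f}")
print(f"GRAVY (hydropathy): {analysis.gravy():.3f}")
```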
Plant material and treatments
Mature green banana fruit (Musa acuminata AAA group, Cavendish subgroup) was obtained from a local market in Guangdong, China. Ethephon-induced ripening, 1-MCP-delayed ripening, and natural ripening of the fruit were evaluated as described previously (Li et al., 2020). Each treatment contained three biological replicates and was stored at 22°C with about 90% relative humidity until fully ripe. The fruit samples from various development stages were collected from local banana plantations in Guangzhou, China. The sampling time point was determined by the number of days after flowering. The fruit samples at each time point were collected and quickly frozen in liquid nitrogen and then stored at −80°C until utilization.
RNA extraction and qRT-PCR analysis
The fine powder of each sample, ground in liquid nitrogen, was used to isolate total RNA as described in a previous report (Asif et al., 2000). The elimination of any potentially contaminating gDNA and reverse transcription into cDNA were performed using a reverse transcription kit (TaKaRa, Dalian, China) according to the manufacturer's protocol. qRT-PCR and data analysis were conducted as per our previously reported method (Li et al., 2020), and MaCAC (clathrin adaptor complex medium subunit, HQ853240) was used as the reference gene (Chen et al., 2011). Primers with high amplification efficiency (90-110%) were designed using Beacon Designer 7 and are listed in Supplementary File 2. The qRT-PCR assay was conducted on the Applied Biosystems Q5 Real-Time PCR System (ThermoFisher, USA) using the TB Green® Premix kit (Tli RNaseH Plus, TAKARA, Dalian, China).
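For readers unfamiliar with how relative expression is typically derived from qRT-PCR data with a reference gene such as MaCAC, the sketch below shows the widely used 2^-ΔΔCt (Livak) calculation. The Ct values are invented for illustration, and the assumption that this exact formula matches the authors' previously reported method is not confirmed by the text.

```python
# Hedged sketch: relative expression by the 2^-delta-delta-Ct (Livak) method,
# with MaCAC as the reference gene. All Ct values are invented for illustration.

def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Fold change of the target gene in a sample relative to a calibrator sample."""
    delta_ct_sample = ct_target - ct_reference            # normalize sample to reference gene
    delta_ct_calibrator = ct_target_cal - ct_reference_cal
    return 2 ** -(delta_ct_sample - delta_ct_calibrator)

# Hypothetical Ct values: a PPO gene in ripening fruit vs. mature green fruit (calibrator).
fold_change = relative_expression(
    ct_target=22.1, ct_reference=24.5,          # ripening fruit
    ct_target_cal=26.3, ct_reference_cal=24.4,  # mature green fruit (calibrator)
)
print(f"relative expression: {fold_change:.1f}-fold vs. calibrator")
```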
Analysis of the expression patterns of MaPPOs using transcriptomic data
The expression data of MaPPO genes during fruit ripening were retrieved from our previous transcriptomic study of fruit ripening (Li et al., 2020), and proteomic analysis of the same samples was performed at the same time. The protein expression levels of MaPPOs were obtained from the proteome data. For analysis of expression patterns during fruit development, banana fruit was collected at 15, 30, 45, 60, and 75 DAF (days after flowering), and total RNA was extracted for sequencing analysis by Nuoji Biotechnology Company (China). The relative expression of MaPPO genes or proteins was displayed as a heat map drawn with TBtools (Chen et al., 2020). The raw reads of the transcriptome were deposited in the National Center for Biotechnology Information Sequence Read Archive (SRA) under accession number PRJNA598018.
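The heat maps themselves were drawn with TBtools; purely as an illustrative alternative, the Python sketch below shows how row-scaled expression values could be rendered as a heat map with matplotlib. The gene list, stages, and FPKM values are invented placeholders, not the study's data.

```python
# Hedged sketch: a row-scaled expression heat map drawn with matplotlib.
# Gene names, stages, and FPKM values are invented placeholders.
import numpy as np
import matplotlib.pyplot as plt

genes = ["MaPPO1", "MaPPO6", "MaPPO7", "MaPPO10"]
stages = ["15 DAF", "30 DAF", "45 DAF", "60 DAF", "75 DAF"]
fpkm = np.array([
    [55.0, 20.0, 35.0, 60.0, 80.0],
    [40.0, 30.0, 18.0, 10.0,  5.0],
    [35.0, 25.0, 15.0,  8.0,  4.0],
    [30.0, 22.0, 12.0,  7.0,  3.0],
])

# Scale each gene (row) to 0-1 so expression patterns, not absolute levels, are compared.
row_min = fpkm.min(axis=1, keepdims=True)
row_max = fpkm.max(axis=1, keepdims=True)
scaled = (fpkm - row_min) / (row_max - row_min)

fig, ax = plt.subplots(figsize=(6, 3))
im = ax.imshow(scaled, cmap="viridis", aspect="auto")
ax.set_xticks(range(len(stages)))
ax.set_xticklabels(stages)
ax.set_yticks(range(len(genes)))
ax.set_yticklabels(genes)
fig.colorbar(im, ax=ax, label="row-scaled expression")
fig.tight_layout()
fig.savefig("mappo_expression_heatmap.png", dpi=150)
```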
Subcellular localization analysis
The CDSs of each MaPPO without the stop codon were amplified by PCR from cDNA and introduced into the pCambia1300-GFP vector (modified from pCambia1300). The expression of GFP and the GFP-fused MaPPOs was driven by the cauliflower mosaic virus (CaMV) 35S promoter. The fusion constructs and the control GFP vector were transiently expressed in tobacco leaf cells using the Agrobacterium-mediated method. The assay was performed as described previously (Sparkes et al., 2006). An ER marker (Mravec et al., 2009) was co-expressed to indicate the localization of the ER. GFP signals were examined with a fluorescence microscope (Zeiss LSM 710).
Prokaryotic expression and determination of PPO activity
The CDSs of the MaPPO genes were introduced into the E. coli expression vector pCZN1. Protein expression was performed under the following conditions: an initial culture (OD600 = 0.5), followed by the addition of 0.2 mM IPTG and incubation at 15°C overnight. Cell lysates were examined by immunoblotting using an anti-His tag antibody (Tiangen, Beijing, China). The recombinant MaPPO proteins were purified on Ni-NTA His-Bind resin (Novagen, USA), and the protein concentration was assayed using an Easy Protein Quantitative Kit (Trans, Beijing, China). The PPO activity assay was conducted as described in a previous report (https://doi.org/10.1007/s11947-018-2232-0) using a polyphenol oxidase activity assay kit (Beijing Solarbio Science & Technology Co., Ltd., Beijing). PPO activity was calculated based on the amount of recombinant protein or the fresh weight of the tissue sample in the reaction system.
Transient overexpression of MaPPOs in banana fruit
The CDSs of the MaPPO genes were subcloned into the pCambia1300 vector. The A. tumefaciens strain EHA105 harboring the constructed plasmids or the control vector was injected into mature green banana fruit through the distal end using a syringe. The middle three hands from each banana bunch were used for the experiment. At least eight fruit fingers were used for each treatment, and 4 mL of A. tumefaciens suspension was delivered into each fruit finger. One day after bacterial infiltration, the injected fruit were dipped into ethephon solution (a 1000-fold dilution of 40% ethephon) for 1 min and stored at 22°C with 90% relative humidity for 5 d. Samples were harvested on day 3 for examination of gene expression and PPO activity.
Results

Determination of polyphenol oxidase genes in Cavendish banana
Through a genome-wide search of the genome database of M. acuminata (https://banana-genome-hub.southgreen.fr/tripal_megasearch), a total of 10 full-length MaPPOs were identified. To identify the candidate coding sequences (CDSs) of the 10 MaPPOs, the coding regions were amplified by PCR using a cDNA template. The final MaPPOs were referred to as MaPPO1-MaPPO10 according to their order on the chromosomes. The exact sequence of each PPO gene in cultivated banana (Musa acuminata AAA group, Cavendish subgroup) was determined by PCR amplification from cDNA and sequencing, and the detailed sequences are listed in Supplementary File 1. The characterization of these MaPPOs is presented in Table 1. The deduced MaPPO proteins had amino acid numbers from 238 to 595, molecular weights (MWs) from 25.76 to 67.57 kDa, hydrophilia parameters from -0.572 to -0.157, and isoelectric points (PIs) from 5.87 to 9.09.
Phylogenetic analysis of MaPPOs and PPOs from other plant species
To investigate the evolutionary relationships of the PPO gene family, an unrooted neighbor-joining (NJ) tree was constructed with 65 PPO proteins from wheat, barley, rice, banana, apple, strawberry, tomato, eggplant, and potato. MaPPO4 was not included in the analysis because its conserved domains were incomplete. As shown in Figure 1, the phylogenetic tree comprised five subgroups. The largest number of PPOs was observed in Group 2, with 26 PPOs, followed by Group 5 (20) and Group 4 (9). Remarkably, MaPPOs were only distributed in Groups 1 and 5, and Group 1 contained only 5 PPO proteins, all from banana and none from the other examined plant species, suggesting that these PPOs might have been gained in banana after its divergence or lost in other plant species. The remaining four PPO proteins of banana were concentrated in Group 5 and clustered in the same branch as other PPOs from monocots (wheat, barley, rice), indicating that they might have evolved from common ancestors. Interestingly, each group included PPO proteins from either monocots or dicots, and the PPO proteins from dicots clustered into three individual groups (Groups 2, 3, and 4), indicating that PPOs from monocots and dicots evolved independently from different lineages.
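For readers who want a scriptable counterpart to the neighbor-joining analysis described above, the Biopython sketch below builds a small NJ tree from a toy protein alignment. The sequences, the identity distance model, and the absence of bootstrapping are illustrative simplifications; they are not the software or settings used by the authors.

```python
# Hedged sketch: a neighbor-joining tree from a toy protein alignment with Biopython.
# The sequences and the identity distance model are placeholders for illustration only.
from Bio import Phylo
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = MultipleSeqAlignment([
    SeqRecord(Seq("MKTLLVAGLFA"), id="MaPPO1"),
    SeqRecord(Seq("MKTLLVSGLYA"), id="MaPPO6"),
    SeqRecord(Seq("MRTLIVSGLYA"), id="MaPPO7"),
    SeqRecord(Seq("MRSLIVAGMYA"), id="OsPPO_like"),
])

distance_matrix = DistanceCalculator("identity").get_distance(alignment)
nj_tree = DistanceTreeConstructor().nj(distance_matrix)   # unrooted neighbor-joining tree
Phylo.draw_ascii(nj_tree)
```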
Gene structure and motif analysis of MaPPO genes
To gain more insight into the evolutionary relationships within the banana MaPPO gene family, a phylogenetic tree of the 10 PPO proteins from banana was constructed in MEGA 11 using the NJ method with 1000 bootstrap replicates (Figure 2A). The motif architectures, conserved protein domains, and gene structures of the 10 MaPPOs were examined within the phylogenetic context and visualized using TBtools. Seven of the 10 MaPPOs had one intron, one MaPPO had two introns, and two MaPPOs had no introns (Figure 2B). Interestingly, two MaPPOs (MaPPO3/7) had no untranslated region (UTR) structure (Figure 2B). The phylogenetic tree of the MaPPO genes contained two subgroups. Generally, genes within the same group had similar and conserved structures in terms of exon number and intron length; for instance, all members in Group 2 had two exons and one intron. Based on searches of the CDD and SMART databases, we identified three conserved domains (PPO1_KFDV, tyrosinase, and PPO1_DWL) in the MaPPO proteins and found them in all MaPPOs except MaPPO4, indicating their conserved biological functions (Figure 2D). Moreover, we identified 10 conserved motifs in the PPO protein sequences using MEME (Figure 2C); five motifs (1, 2, 3, 5, and 8), one motif (6), and two motifs (4 and 9) corresponded to the N-terminal tyrosinase, intermediate PPO1_DWL, and C-terminal PPO1_KFDV domains, respectively. In addition, in most MaPPOs, two novel motifs (10 and 7) were identified at the N-terminus of the protein sequences.
Expression profiles of MaPPO genes in various tissues of banana
To investigate the roles of MaPPOs, we examined the expression levels of nine MaPPO genes in roots, pseudostems, corms, young leaves, mature leaves, bracts, and mature green fruit of banana using qRT-PCR (Figure 3).
FIGURE 4. Comparison of expression levels of polyphenol oxidase in fruit. (A) qRT-PCR analysis of the expression level of MaPPOs in mature green fruit. (B) Expression level of MaPPO proteins in response to ethephon. The heatmap was generated using proteome data. The color bar indicates the range of maximum and minimum values of relative expression in the heatmap; gray indicates that the protein was undetectable. Different letters indicate significant differences (P < 0.05) examined by one-way ANOVA.

FIGURE 3. Expression profiles of MaPPOs in various tissues of Musa acuminata. The transcript level in one tissue was arbitrarily set to 1. Error bars represent the standard deviations of the mean value from three biological and three technical replicates. Different letters indicate significant differences (P < 0.05) examined by one-way analysis of variance (ANOVA).
Three MaPPO genes (MaPPO3/8/9) were undetectable in all examined tissues due to their low transcript abundance (data not shown). MaPPO1 was preferentially expressed in mature green fruit, and MaPPO2 and MaPPO10 were predominantly expressed in bract tissue. MaPPO6 and MaPPO7 had similar expression patterns and were highly expressed in young leaves, whereas MaPPO5 was highly expressed in mature leaves (Figure 3). Differential expression of various MaPPOs could imply their different functions in various plant tissues. To further understand the role of MaPPO genes in fruit tissue, the transcript levels of all MaPPOs were compared. The data revealed that MaPPO1 had the highest expression in fruit, followed by MaPPO6, MaPPO10, MaPPO7, and MaPPO5 (Figure 4A). Moreover, MaPPO1 and MaPPO6 had the highest protein expression levels in the proteome data of banana fruit (Figure 4B), suggesting that they might play an important role in fruit tissues.
Expression analysis of MaPPO genes during fruit development and ripening
To determine the role of MaPPOs in fruit development and ripening, the transcriptomic data for the MaPPOs were analyzed. Four genes (MaPPO1/6/7/10) had relatively high transcript abundance that varied with stage (Figure 5), whereas MaPPO2/3/5/8/9 were almost undetectable throughout fruit development (data not shown). qRT-PCR analysis indicated that the expression patterns of these MaPPOs were similar to those observed in the transcriptomic analysis (Figure 5). At the early developmental stage of fruit, MaPPO6/7/10 exhibited high expression levels, which decreased gradually to very low levels at 75 days after flowering (DAF) (Figure 5). The MaPPO1 transcript was present at a relatively high level at 15 DAF, decreased at 30 DAF, and then increased to its highest expression level at later stages of development (Figure 5). During postharvest ripening of banana fruit, transcriptomic data showed that MaPPO1/4/5/6/7/10 were detectable after ethephon treatment, and MaPPO1 presented the highest expression level among them, whereas the other four MaPPOs were almost undetectable (Figure 6A). To confirm the reliability of PPO gene expression in the transcriptome, the expression of the MaPPO genes was examined using qRT-PCR. MaPPO4 was not included in the following experiments due to its missing PPO1_KFDV and PPO1_DWL domains. An expression pattern of the MaPPO genes similar to that in the transcriptome data was observed during the ripening process (Figure 6B). In response to ethephon treatment, the expression of MaPPO1 increased to its highest level at day 3 and then decreased gradually; the other four detectable MaPPO genes showed relatively low abundance. MaPPO6 and MaPPO7 exhibited gradually decreasing expression patterns. MaPPO5 presented relatively higher expression at the fruit senescence stage, whereas MaPPO10 showed an expression trend of decreasing first and then increasing throughout the ripening process. Under natural ripening and 1-methylcyclopropene (1-MCP) treatment, a trend similar to that under ethephon treatment was observed, and the expression of each MaPPO was delayed with the postponement of fruit ripening (Figure 6). Interestingly, MaPPO1 and MaPPO6 exhibited higher transcript levels both at the early fruit development stage (Figure 5A, 15 d) and at the mature green stage (Figure 4A; Figure 6A, 0 d) and had higher protein expression levels at the mature green fruit stage (Figure 4B, control) compared with the other MaPPO genes.
Subcellular localization of MaPPOs
To further investigate the role of MaPPOs in fruit development and ripening, four MaPPO genes with relatively high expression were selected for subcellular localization analysis. The CDS of each PPO gene was fused in frame with GFP (Figure 7). These GFP chimeras were transiently expressed in tobacco leaf cells, and the localization of the expressed proteins was examined using confocal laser microscopy 48 h after Agrobacterium infection. The fluorescence signal of 35S::GFP was observed in the nucleus and cytoplasm (Figure 7), whereas the signals of MaPPO1::GFP and MaPPO7::GFP overlapped with the auto-fluorescence of chloroplasts (Figure 7A), suggesting that MaPPO1 and MaPPO7 are chloroplast-localized proteins. The fluorescence signal of MaPPO6::GFP was observed both in chloroplasts and in the ER, and the signal of MaPPO10::GFP co-localized with that of the endoplasmic reticulum (ER) marker, indicating that MaPPO6 is a chloroplast- and ER-localized protein and MaPPO10 is an ER-localized protein (Figure 7).
Polyphenol oxidase activity of MaPPO proteins
To determine whether the MaPPO genes encode active polyphenol oxidase enzymes, MaPPO1/6/7/10 were chosen for prokaryotic expression in Escherichia coli. Recombinant MaPPO proteins with a His-tag were isolated and purified. The molecular weight of these proteins was verified by Western blot analysis using an anti-His antibody (Figure 8), and the molecular mass of the recombinant PPOs was consistent with the predicted molecular mass, indicating successful expression in the prokaryotic expression system. The enzyme activity assay indicated that MaPPO1 had the highest activity among the examined MaPPO proteins, followed by MaPPO6, MaPPO7, and MaPPO10, and MaPPO1 yielded a PPO activity 7-fold higher than that of MaPPO10. These results indicate that MaPPO1 and MaPPO6 might contribute the main PPO activity in banana fruit. To further examine the in vivo activity of MaPPO proteins, MaPPO1 and MaPPO6 were transiently overexpressed in banana fruit (Figure 9). In the MaPPO-overexpressing fruit, the expression levels of MaPPO1 and MaPPO6 were about 20 times and 35 times those in the control fruit, suggesting successful expression of the target genes (Figure 9B). PPO activity analysis indicated that MaPPO1- and MaPPO6-overexpressing fruit had 6- and 5-fold higher PPO activity than the control fruit, implying that they are active polyphenol oxidase enzymes in banana (Figure 9).
Discussion
As widespread copper-containing metalloenzymes in plants, PPOs play important roles in plant growth and development, stress resistance, and stress tolerance, and they widely exist in land plants, fungi, and some bacteria. The member numbers of the PPO gene family varied with species and were not directly involved in the size of the genome. No PPO genes were identified in Arabidopsis, Brassica napus, or some green algae, implying that PPOs are likely not necessary for primary metabolic function; rather, they are involved in secondary metabolism and ecological adaptation (Tran et al., 2012).
Although several MaPPO genes of banana have been reported to be available in the banana genome (Gooding et al., 2001), detailed information on these MaPPO genes is missing, and the role of MaPPOs is still not clear. In this research, we identified 10 PPO genes in banana with the available whole genome sequence. Most MaPPOs had one intron, while MaPPO1 had two introns, and MaPPO2/5 had no introns ( Figure 2B). Introns have been reported to regulate transcription, and intron-lacking genes might exhibit the rapid expression of mRNA in response to various stressors (He et al., 2021). This implies that MaPPO2/5 genes are transcribed faster under certain conditions to form mRNA.
Except for MaPPO4, all MaPPO proteins contained three conserved domains (KFDV, tyrosinase, and DWL). They may offer similar and conserved biological functions ( Figure 2D). Among them, MaPPO6/7/8/9 genes were located on chromosome 8 (Table 1), had high sequence similarity, and clustered together in the same group ( Figure 1A). We hypothesized that MaPPO7/8/9 originated from MaPPO6 by gene duplication. This is supported by the expression data ( Figure 4). Among them, MaPPO6 had the highest expression, MaPPO7 exhibited trace expression, and the transcripts of MaPPO8/9 were not detectable.
Generally, plant PPOs are primarily localized in plastids or chloroplasts (Tran et al., 2012); this localization is determined by an N-terminal targeting signal, a chloroplast transit peptide (cTP) responsible for the translocation of PPO proteins to the thylakoid lumen (Koussevitzky et al., 2008). In contrast, aureusidin synthase 1 (AmAS1) in Antirrhinum majus and PtrPPO13 in Populus trichocarpa have been reported to be localized in the vacuole (Ono et al., 2006). Nevertheless, in potato tubers and tea, PPOs were found in almost all subcellular compartments and organelles (Vámos-Vigyázó and Haard, 1981; Zhang and Sun, 2021), and a chalcone synthase in wine grapes was also localized in multiple organelles, predominantly in the plastid, rough ER, cytoplasm, vacuole, and cell wall (Tian et al., 2008). In banana, MaPPO1 and MaPPO7 were distributed in chloroplasts, and MaPPO6 was localized both in chloroplasts and in the ER. MaPPO10 is an ER-localized PPO, implying that it might be related to the biosynthesis of secondary metabolites, because chalcone synthase, the first enzyme of the flavonoid pathway, was found to be localized in the cytosol and ER (Kaintz et al., 2014). These data suggest that the location of PPO varies with plant species and that PPOs might function differently depending on their subcellular locations.
PPO genes of most plant species are present as a multigene family, except for several PPO-lacking plant species (Tran et al., 2012). The qRT-PCR results and transcriptome data indicated that the transcript levels of MaPPOs are distinctly regulated among tissues, during fruit development, and during fruit ripening, and 4 of the 10 MaPPO genes were not detectable in the examined tissues by qRT-PCR. This is consistent with observations in other plant species (He et al., 2021). The expression pattern of MaPPO6 and MaPPO7 was similar to that reported previously for BPO1 (Gooding et al., 2001), which was not detected in mature leaves but was present in young leaves and stems. Moreover, the expression of the detected MaPPO genes, except MaPPO1, was consistent with that in other plant species and generally decreased during fruit development and maturation (Figure 5B, fruit development), suggesting that fruit browning during ripening might result from pre-existing PPO expressed in early fruit development. Our data revealed that MaPPO1 is preferentially expressed in fruit tissue and is the most highly expressed of all examined PPO genes in that tissue. The proteome data also indicated that MaPPO1 is the most abundant PPO protein during fruit ripening, followed by MaPPO6 (Figure 4B). Therefore, MaPPO1 and MaPPO6 might be the main contributors to the PPO activity that causes fruit browning (Figures 4A, B, control; Figure 5A, 15 d; Figure 6A, 0 d). Moreover, their high PPO activity in vivo and in vitro and their chloroplast localization also support this (Figures 7-9). We speculate that the functions of the two MaPPO genes may be redundant or partially overlapping. Therefore, knockout of MaPPO1 or MaPPO6, or both, in banana could reduce detrimental oxidative browning in fruit tissues. In subsequent research, we will develop a PPO-mutated banana using our group's established genome editing approach, and we believe that the analysis of mutant plants will improve our understanding of the role of MaPPOs in banana.
Conclusions
In summary, 10 MaPPO genes were identified through genomewide analysis based on released genome data. Analysis of gene structures, motifs, and sequence features revealed the conservation and divergence of MaPPOs. Expression analysis indicated that MaPPO1 and MaPPO6 exhibit high transcript abundance in mature green fruit, paralleled by the protein content data, and MaPPO1 was preferentially expressed in fruit tissues. All detectable MaPPO genes in mature green fruit showed reduced expression trends during fruit ripening, except MaPPO1. Moreover, MaPPO1 and MaPPO6 showed chloroplast localization in tobacco leaf cells, indicating that MaPPO1 and MaPPO6 play important roles in fruit browning. In addition, the PPO activity assay of MaPPO in vivo and in vitro supported the idea that MaPPO1 and MaPPO6 might offer major PPO activity in fruit tissues. This research reveals that MaPPO1 and MaPPO6 are major contributors to fruit browning during fruit ripening, which will further our understanding of the function of MaPPO genes and provide candidate genes for developing new varieties with low fruit browning in the future.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ncbi.nlm.nih.gov/genbank/, PRJNA598018.
Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Bleeding Index and Monocyte Chemoattractant Protein 1 as Gingival Inflammation Parameters after Chemical-Mechanical Retraction Procedure
Objective: A widely used chemical-mechanical method of gingival retraction can cause gingival tissue damage. The aim of this study was to test the influence of the chemical-mechanical gingival retraction procedures on the gingival bleeding index (GBI) and the salivary concentration of monocyte chemoattractant protein 1 (MCP-1) as an indicator of inflammatory changes in the gingiva. Materials and Methods: The effects of 2 different retraction agents (aluminum chloride and ferric sulfate) were compared, particularly their tissue damaging effect during tooth preparation. Therefore, GBI values and the salivary concentration of MCP-1 were assessed during the chemical-mechanical method of gingival retraction in a homogenous group of respondents. The subjects (n = 60) were divided into 2 experimental groups (G1 and G2) regarding the need for tooth preparing and making artificial crowns. Each group was further divided into 2 subgroups (R1 and R2) according to the type of the gingival retraction agent used (aluminum chloride and ferric sulfate). Results: Compared to the values at the study start, a statistically significant increase in GBI and salivary MCP-1 (p < 0.001) 1 day after gingival retraction agent application was observed in both experimental groups. After 72 h, the values were lower than in the second observation period but still statistically significantly higher compared to the study start (p < 0.001), which indicated the reversibility of the tissue changes. Conclusion: Higher values of the investigated parameters were observed in the group of subjects with prepared teeth, and clinical changes were more pronounced after the use of ferric sulfate.
Introduction
Fixed prosthodontic appliance therapy involves making artificial crowns or bridges for the purpose of rehabilitating the dental arch. A well-made fixed restoration intimately rests on the dental tissue in the region of the preparation boundary (demarcation line). Gingival retraction is necessary in cases in which the demarcation line is localized at or below the edge of the gingiva, with the aim of providing accurate imprinting [1].
The most commonly used chemical-mechanical method of gingival retraction involves the use of a retraction cord soaked in astringent fluid, most commonly aluminum and iron salts, to allow the marginal gingiva to reversibly dislocate apically and laterally and to permit the region of the gingival sulcus to drain. The mechanism of action of astringent is protein precipitation and inhibition of transcapillary movement of plasma proteins [2]. Astringent retraction agents reduce cellular permeability and drain gingival tissue, leading to its reversible recession. Protein precipitation and denaturation can cause local tissue damage [2][3][4][5]. The potential toxicity of aluminum chloride at concentrations > 10% has been demonstrated [6,7]. Ferric sulfate coagulates blood, but often hemorrhage recurs after removal of the retraction cord, and the opening of the gingival sulcus is less than what is seen when aluminum salts are used [8]; the authors have reported possible tissue damage caused by ferric sulfate [8,9].
The first signs of damage caused by the chemicalmechanical retraction procedure appear on the gingival tissue, and the resulting inflammation leads to an increase in the concentration of proinflammatory cytokines and immunoglobulins in saliva, as well as to structural changes in the tissue itself [10]. Gingival indices allow the numerical expression of the resulting changes and objective evaluation of the periodontal condition. Proinflammatory cytokines are associated with oral tissue destruction, proteinase induction, and bone decomposition, and their increased production has been observed in numerous oral diseases [11,12]. Monocyte chemoattractant protein 1 (MCP-1) is a proinflammatory chemotactic cytokine that can trigger different groups of leukocytes through interaction with specific receptors and can induce the formation of specific inflammatory infiltrates; thus, it can be considered a sign of newly developed inflammation [13,14]. Early detection of inflammation markers after standard dental procedures may help to prevent the occurrence of more severe periodontal damage.
The aim of this study was to test the influence of the chemical-mechanical gingival retraction procedure on gingival bleeding index values and the salivary MCP-1 concentration as an indicator of inflammatory changes in the gingiva.
Subjects
The study included 60 subjects of both sexes, nonsmokers, aged 20-40 years, with no systemic diseases and with a completely rehabilitated oral cavity. All subjects were examined and rehabilitated by a periodontist before the intervention, so that there were no inflammatory changes in the supporting dental tissues at examination onset. The subjects were divided into 2 experimental groups (G1 and G2) based on the need for tooth preparation and artificial crowns. Each group was further divided into 2 subgroups (R1 and R2) [15] according to the type of gingival retraction agent used (Table 1). Sample size was calculated with the statistical program G*Power using an F test (ANOVA) for two-way null hypothesis testing. The following parameters were specified: a type I error probability of α = 0.05 and a statistical power of 0.8. With these initial parameters, and based on the publication by Di Venere et al. [16], a minimum sample size of at least 8 subjects per subgroup was obtained for both study groups.
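For readers who wish to reproduce this kind of a priori calculation without G*Power, the sketch below shows an equivalent computation in Python with statsmodels. The Cohen's f effect size is a hypothetical value chosen purely for illustration; the effect size assumed in the original calculation is not reported here.

```python
# A minimal sketch of an a priori sample-size calculation for a factorial
# ANOVA, analogous to the G*Power calculation described above.
# The effect size (Cohen's f) below is a hypothetical assumption.
from statsmodels.stats.power import FTestAnovaPower

alpha = 0.05        # probability of a type I error
power = 0.80        # desired statistical power
effect_size = 0.55  # assumed (large) Cohen's f -- illustrative only
k_groups = 4        # 2 study groups x 2 retraction-agent subgroups

n_total = FTestAnovaPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, k_groups=k_groups
)
print(f"total N ~ {n_total:.1f} (~{n_total / k_groups:.1f} per subgroup)")
```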
Methods
By recording the time and extent of bleeding, the GBI numerically expresses the activity of the inflammatory process in the gingiva. Testing was performed by probing the gingival sulcus with a blunt-ended periodontal probe. The intensity of the resulting bleeding was scored based on the behavior of the gingiva after probing: 0, no bleeding; 1, bleeding 10-30 s after probing; 2, bleeding during gingival probing; and 3, spontaneous bleeding of the gingiva. The index was determined by the same periodontologist before the retraction procedure and at each of the other predefined observation periods.
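As a minimal illustration of this four-point scale, the snippet below encodes the scoring criteria as a simple lookup; the function name and category labels are hypothetical and are not part of the original clinical protocol.

```python
# A minimal sketch of the GBI scoring scheme described above.
# The category labels are illustrative, not part of the clinical protocol.
def gbi_score(observation: str) -> int:
    """Map the gingival response observed after sulcus probing to a GBI score."""
    scores = {
        "none": 0,         # no bleeding
        "delayed": 1,      # bleeding 10-30 s after probing
        "on_probing": 2,   # bleeding during gingival probing
        "spontaneous": 3,  # spontaneous bleeding of the gingiva
    }
    return scores[observation]

print(gbi_score("delayed"))  # -> 1
```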
MCP-1 concentrations were determined using the human CCL2/MCP-1 Quantikine ELISA kit (sensitivity 10 pg/mL). Saliva samples were centrifuged at 10,000 rpm for 5 min. The separated supernatant was frozen at -80 °C until analysis.

Clinical Procedure

Subjects in the G1 group had been indicated for an artificial crown, i.e., the preparation of 1 tooth, which precedes the retraction procedure. The preparation demarcation was located 0.25-0.5 mm below the gingival level, with maximum preservation of gingival tissue integrity. Tooth preparation was performed atraumatically with the same type of dental bur by 3 trained therapists, thus reducing the possibility of gingival inflammation due to mechanical damage to the gingiva. Since there was no tooth preparation in group G2, the chemical-mechanical retraction procedure was performed on the upper left central incisor.
The chemical-mechanical method of gingival retraction involved the application of a retraction cord (Elite Cord, Zhermack SpA, Italy) of the appropriate diameter, impregnated with R1 or R2, into the gingival sulcus of the reference tooth for 5 min. The retraction cord was atraumatically pressed by the same therapist along the entire scope of the tooth using a plastic instrument. The entire study period involved 3 observation periods: prior to (T0), and 24 h (T1) and 72 h (T2) after the chemical-mechanical retraction procedure. The first observation period (T0) in G1 was related to the time before tooth preparation. In each observation period, GBI was determined for the reference tooth, and a sample of nonstimulated saliva was collected into a sterile tube. Given that the test parameters in all subjects were determined prior to and after the retraction procedure, all samples collected before the treatment were considered as controls. Clinical procedures for making artificial crowns on prepared teeth followed the study period.
Statistical Analysis
The data were processed with the statistical software SPSS 15.0. The Mann-Whitney U test was used to assess the significance (p) of differences in continuous variables between 2 independent subject groups. A value of p < 0.05 was considered statistically significant. Changes in the arithmetic mean of the variables measured at the 3 observation periods in the study groups were analyzed using repeated-measures ANOVA (RM ANOVA). Effect sizes were expressed as partial eta squared (ηp²), with the effect defined as "small" for values > 0.01, "medium" for values > 0.06, and "large" for values > 0.14.
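The effect-size convention above is straightforward to reproduce; the sketch below computes partial eta squared from ANOVA sums of squares and applies the same thresholds. The sums of squares in the example are hypothetical values used only for illustration.

```python
# A minimal sketch of the partial eta squared effect-size measure and the
# small/medium/large thresholds quoted above. Input values are hypothetical.
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    return ss_effect / (ss_effect + ss_error)

def effect_label(eta_p2: float) -> str:
    if eta_p2 > 0.14:
        return "large"
    if eta_p2 > 0.06:
        return "medium"
    if eta_p2 > 0.01:
        return "small"
    return "negligible"

eta = partial_eta_squared(ss_effect=4.2, ss_error=18.5)  # hypothetical sums of squares
print(f"partial eta squared = {eta:.3f} ({effect_label(eta)})")
```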
Results

Table 2 shows a statistically significant increase in GBI values (p < 0.001) 24 h after the application of the gingival retraction agent in both experimental groups compared to the values at the beginning of the study. After 72 h, the values were lower compared to the second observation period, but they were still statistically significantly higher compared to those at the study onset (p < 0.001).
The influence of the type of the gingival retraction agent on GBI in the experimental groups during the entire study period is shown in Table 3. Within both groups, G1 and G2, a statistically significant increase in GBI values was found for both tested retraction agents and for all subjects during the entire study period (p < 0.001). The gingival retraction agents showed major effects on the GBI value. In both groups, the effect size was greater with the use of the aluminum chloride-based gingival retraction agent (R1). Testing the effects between subgroups (between participant effects) showed that they did not differ statistically significantly in GBI values over the entire follow-up period. Changes in GBI values during the study period were similar in both groups of subjects (Fig. 1). One day after the retraction procedure, both experimental groups experienced a statistically significant increase in the salivary concentration of MCP-1 (p < 0.001). After 72 h, the salivary concentration of MCP-1 was decreased compared to the second observation period, yet it remained statistically significantly higher than prior to the application of the gingival retraction agent (p < 0.001) ( Table 4).
The effect of the gingival retraction agent type on the salivary concentration of MCP-1 during the study period in groups G1 and G2 is shown in Table 5. Within these groups, a statistically significant increase in MCP-1 values was observed over the entire study period (p < 0.001) for both retraction agent types and for all subjects, with a large effect of the retraction agent. Testing the effect of the retraction agent types between subgroups showed that they did not differ statistically significantly in MCP-1 concentrations throughout the study period, with a negligible effect of the gingival retraction agent used.

Changes in MCP-1 concentrations during the study period occurred in a statistically significantly different manner depending on the retraction agent type only in the group of subjects with nonprepared teeth (p < 0.001), with a large effect of the retraction agent. In group G1, changes in MCP-1 concentrations did not differ statistically significantly between the subgroups (Fig. 2).
Discussion
The study started from the assumption that the widely used chemical-mechanical procedure of gingival retraction during the production of fixed prosthetic restorations may damage the treated gingival tissue and may cause an acute inflammatory reaction, as evidenced by an increase in the GBI value, which serves as a clinical parameter, as well as in the salivary concentration of MCP-1, a marker of inflammation. The results of the study indicate reversible damage to the gingival tissue after the application of both retraction agents studied. Clinical observation of the gingival tissue in the subjects revealed mild-to-moderate inflammation 1 day after the retraction procedure, which resulted in bleeding during gingival sulcus probing in all subjects in the prepared teeth group as well as after the application of ferric sulfate in subjects with nonprepared teeth. Three days after the gingival retraction, inflammation was reduced, and bleeding after probing was mild in group G1, while in group G2 it was even milder. These results agree with those of other authors, who demonstrated the recovery of oral tissue after retraction procedures [17,18]. Higher GBI values in subjects with prepared teeth indicated a degree of mechanical tissue injury that contributed to the inflammatory response of the gingiva [19].
Proinflammatory cytokines play important roles in triggering and maintaining inflammatory and immune responses [10]. The chemical-mechanical retraction method led to an increase in the MCP-1 concentration in the subjects of both experimental groups, which is in a positive correlation with the results of the clinical parameter monitored. A decrease in MCP-1 with time proved the reversibility of the resulting changes. The type of the retraction agent used did not affect the measured MCP-1 concentration, but its mean values were higher in group G1, which emphasizes the importance of tooth preparation in the inflammatory reaction observed in the gingival tissue.
Our results are consistent with those of other studies published to date and indicate that increased MCP-1 secretion is an indicator of periodontal damage [20][21][22]. Garlet et al. [21] demonstrated that MCP-1 in diseased gingival tissue supports the maturation of monocytes into macrophages, whose role is to destroy pathogens and secrete proinflammatory mediators. Macrophage-released products such as IL-1 and TNFα, in addition to contributing to the inflammatory reaction, also trigger bone resorption [22]. Therefore, the chemotactic action of MCP-1 on monocytes and macrophages can support chronic inflammatory responses and the bone loss present in periodontopathy [12]. In the gingival fluid of patients with aggressive and chronic periodontitis, MCP-1 levels were higher than in healthy subjects [23,24]. In a study by Pradeep et al. [12], a proportionate decrease in MCP-1 levels was noted in the gingival fluid after periodontal therapy.
To date, the results of the potential iatrogenic effect of the chemical-mechanical retraction procedure have been obtained in in vitro and in vivo studies in experimental models, whereas studies in clinical conditions are insufficient and, at the same time, necessary from the professional point of view. The study of clinical parameters of gingival damage and the determination of the salivary concentration of the reference proinflammatory cytokine in a homogeneous group of patients have given an exact answer regarding potential side effects of the widely used chemical-mechanical method of gingival retraction. The right choice of clinical procedures and therapeutic agents diminishes iatrogenic damage that would compromise the effect of prosthetic therapy and reduce the durability of fixed restoration.
Conclusion
The bleeding index values and the salivary concentration of MCP-1 increased statistically significantly after the chemical-mechanical gingival retraction procedure, with a tendency to decrease over time, which indicated the reversibility of the resulting changes. Higher values of the studied parameters were observed in the group of subjects with prepared teeth, and clinical changes were more pronounced after the use of ferric sulfate, although no statistically significant difference was found.
Characterization of the O-antigen Polymerase (Wzy) of Francisella tularensis*
The O-antigen polymerase of Gram-negative bacteria has been difficult to characterize. Herein we report the biochemical and functional characterization of the protein product (Wzy) of the gene annotated as the putative O-antigen polymerase, which is located in the O-antigen biosynthetic locus of Francisella tularensis. In silico analysis (homology searching, hydropathy plotting, and codon usage assessment) strongly suggested that Wzy is an O-antigen polymerase whose function is to catalyze the addition of newly synthesized O-antigen repeating units to a glycolipid consisting of lipid A, inner core polysaccharide, and one repeating unit of the O-polysaccharide (O-PS). To characterize the function of the Wzy protein, a non-polar deletion mutant of wzy was generated by allelic replacement, and the banding pattern of O-PS was observed by immunoblot analysis of whole-cell lysates obtained by SDS-PAGE and stained with an O-PS-specific monoclonal antibody. These immunoblot analyses showed that O-PS of the wzy mutant expresses only one repeating unit of O-antigen. Further biochemical characterization of the subcellular fractions of the wzy mutant demonstrated that (as is characteristic of O-antigen polymerase mutants) the low molecular weight O-antigen accumulates in the periplasm of the mutant. Site-directed mutagenesis based on protein homology and topology, which was carried out to locate a catalytic residue of the protein, showed that modification of specific residues (Gly176, Asp177, Gly323, and Tyr324) leads to a loss of O-PS polymerization. Topology models indicate that these amino acids most likely lie in close proximity on the bacterial surface.
Francisella tularensis is a highly infectious Gram-negative facultative intracellular pathogen that causes tularemia, an often fatal zoonotic disease (1)(2)(3)(4). Considered a potential biological weapon (5), F. tularensis is classified as a CDC category A select agent because of its capacity to disseminate by aerosol and to induce severe morbidity and high mortality. F. tularensis encompasses four closely related subspecies: tularensis, holarctica, mediasiatica, and novicida (6 -8). F. tularensis subsp. tularensis (type A) is highly virulent to many mammalian species, including humans, and is found predominantly in North America. F. tularensis subsp. holarctica (type B), which is isolated primarily in Europe and northern Asia, causes a milder infection in humans but is still highly virulent in mice. A live attenuated vaccine strain, LVS (live vaccine strain), was derived from F. tularensis subsp. holarctica (5), but its use has not been licensed in the United States because of untoward reactions and an incomplete understanding of the precise mechanism of attenuation.
In Gram-negative bacteria, an important component of the outer membrane is the lipopolysaccharide (LPS) or endotoxin. LPS comprises three chemically linked components: lipid A, a hydrophobic glycolipid that is integral to the membrane; core polysaccharide, a nonrepeating oligosaccharide located at the membrane surface that is ketosidically linked to an N-acetylglucosamine disaccharide residue of lipid A; and O-antigen, a polysaccharide made up of repeating units of sugars extending from the membrane surface into the environment (9). The O-antigen structure varies with the bacterial species, but in most Gram-negative bacteria, each repeating unit is composed of up to eight sugars linked glycosidically to each other and to the core polysaccharide. O-antigens are responsible for many of the properties of Gram-negative bacteria, including serologic specificity, interactions with the innate immune system, and bacteriophage attachment. The structure of LPS from F. tularensis has been well described (10-14). The O-antigen of F. tularensis strains LVS and SchuS4 contains the rare sugars 2-acetamido-2,6-dideoxy-D-glucose (QuiNAc) and 4,6-dideoxy-4-formamido-D-glucose (Qui4NFm) as well as 2 mol of 2-acetamido-2-deoxy-D-galacturonamide (GalNAcAN); the resulting repeat structure is 4-α-GalNAcAN-1,4-α-GalNAcAN-1,3-β-QuiNAc-1,2-β-Qui4NFm.
The three known modes of O-antigen assembly (Wzy-dependent, ABC transporter-dependent, and synthase-dependent) rely on export mechanisms and the presence of specific enzymes in a given pathway (15). In the Wzy-dependent pathway (Fig. 1), O-repeating unit synthesis is initiated from two different classes of integral membrane proteins: N-acetylhexosamine-1-phosphate transferase (PNPT) and polyisoprenyl-phosphate hexose-1-phosphate transferase. These proteins catalyze the transfer of sugar nucleotides to undecaprenol phosphate (16-18). The O-antigen subunits are then assembled from undecaprenol phosphate-sugar by serial actions of glycosyltransferases at the cytoplasmic face of the inner membrane (19) and are transported to the periplasmic face by the protein Wzx (flippase). Assembly and polymerization of the subunits by Wzy (O-antigen polymerase) and subsequent ligation to the lipid A core by WaaL occur at the periplasmic portion of the inner membrane (20). Finally, the entire structure is translocated to the outer membrane (21,22) with the help of several essential proteins, including LtpA and Imp (23)(24)(25). Two genes in the F. tularensis LVS O-antigen gene cluster encode proteins whose high degree of sequence similarity to Wzy and Wzx suggests that O-antigen is transported and polymerized via the Wzy-dependent pathway.
The O-antigen gene cluster encoding the enzymes for the biosynthesis of nucleotide sugars and the glycosyltransferases for O-unit assembly has been found in many bacteria (16, 26-28). The F. tularensis putative O-antigen biosynthetic gene cluster has been identified (12); it is estimated to be ~17 kb in length and is predicted to contain 15 genes involved in O-antigen biosynthesis (12). Deletion of any single gene within the locus might result in significant attenuation of the organism's virulence (29-31) because an incomplete LPS would probably be synthesized. Because most of the genes within the cluster have been assigned putative functions on the basis of sequence similarity with genes in O-antigen biosynthetic clusters from other bacteria (26,32), additional studies are required to elucidate the actual role of each protein.
The O-antigen polymerase (Wzy) is responsible for adding oligosaccharide repeating units of the O-antigen to the LPS core region. Although Wzy is essential for O-polysaccharide synthesis, amino acid residues that are critical to the function of this protein have not been identified. The Wzy proteins from several bacterial species have been identified, and all are predicted to be integral membrane proteins with 11-13 transmembrane domains. Even at the amino acid level, these proteins show little similarity to each other in terms of primary sequence (27,33). The absence of conserved regions has complicated the identification of catalytic and binding residues in Wzy; consequently, the mechanism of its activity has not been unraveled. Crystallographic resolution of the structure of Wzy has not been reported, perhaps because of the difficulty of defining structure in proteins that are so highly integrated into membranes. This study describes the functional and biochemical characterization of the putative Wzy in F. tularensis LVS and the identification of specific amino acid residues that play an important role in maintaining this protein's catalytic function.
EXPERIMENTAL PROCEDURES
Bacterial Strains and Plasmids-Bacterial strains and plasmids used in this study are summarized in supplemental Table 1. F. tularensis LVS (kindly provided by Dr. Karen Elkins, United States Food and Drug Administration, Bethesda, MD) was grown either in modified Mueller-Hinton broth (Difco) supplemented with ferric pyrophosphate and IsoVitaleX (BD Biosciences) or on cysteine heart agar supplemented with 1% hemoglobin (CHAH) for 72 h at 37°C in 5% CO2. Escherichia coli strain DH5α was used for cloning experiments, and Shigella flexneri 2a (kindly provided by Dr. Marcia Goldberg, Massachusetts General Hospital, Boston, MA) was grown either in Luria-Bertani (LB) broth (Difco) or on LB agar plates. When antibiotic selection was necessary, ampicillin was added to a final concentration of 100 μg/ml, and kanamycin was added to a final concentration of 50 μg/ml in LB medium and 10 μg/ml in CHAH.
DNA Methods-Plasmids were introduced into F. tularensis LVS by electroporation (34). Plasmid minipreps were prepared from E. coli DH5α and F. tularensis LVS with a QIAprep spin miniprep kit (Qiagen, Hilden, Germany). Restriction enzymes and DNA-modifying enzymes were purchased from New England Biolabs (Beverly, MA) and were used as recommended by the manufacturer. Standard molecular cloning and transformation procedures were employed (35).
Construction of the Putative wzy Mutant of F. tularensis LVS-Construction of the wzy mutant of F. tularensis LVS was based on allelic exchange of the target gene by homologous recombination (36,37). To construct a suicide plasmid for the recombination event, two fragments (fragments 1 and 2, each consisting of ϳ1,000 bp of up-and downstream regions of the putative wzy gene) were amplified by PCR with two pairs of primers. For fragment 1, the forward primer was wzYm_F1 (5Ј-ACCCCCGGGATAAATCTGAGTTTGAAAAAG-3Ј; XmaI site underlined), and the reverse primer was wzYm_R1 (5Ј-TGTGGTACCTTAATTAATTAAATACCACTC-3Ј; KpnI site underlined). For fragment 2, the forward primer was wzYm_F2 (5Ј-ACCGGTACCATTTGAAAAAGGTTTGTA-CAT-3Ј; KpnI site underlined), and the reverse primer was wzYm_R2 (5Ј-TGTGAATTCATAAGTGATATAACCG-TAATA-3Ј; EcoRI site underlined). The two amplified frag- ments were subsequently ligated into the XmaI and KpnI sites (fragment 1) and into the XmaI and EcoRI sites (fragment 2) of pEX18.Kan to yield pTH29. The sacB gene, which was used as a counterselection marker, was amplified from pPV (38) and inserted into the PstI site of pTH29 to compensate for the low expression of sacB in pEX18.Kan; the final result was pTH30. For the first homologous recombination, suicide plasmid pTH30 was introduced into the target bacteria by electroporation, as described previously (34). After electroporation, the transformed cells were resuspended in 1 ml of tryptic soy broth supplemented with cysteine, incubated at 37°C without antibiotics on a rotary shaker until an A 620 of ϳ0.6 was reached, and then plated onto CHAH supplemented with 10% sucrose to induce a second recombination. Individual sucrose-resistant, kanamycin-sensitive colonies were selected as the mutant candidates. PCR was used to screen and select mutants, and the selected mutants were confirmed by genomic sequencing as described previously (39).
Construction and Expression of the Wzy-GFP Fusion Protein in F. tularensis LVS-Site-directed mutant wzy genes in each plasmid were amplified by PCR with forward primer wzy_ ERI_F (5Ј-ACCGAATTCATGTACATAAAAAAAGTG-3Ј; EcoRI site underlined) and reverse primer wzy_ERI_R (5Ј-TGTGAATTCCCAAATCACACTCCTAGT-3Ј; EcoRI site underlined), and ligated into the EcoRI site of pTH32, which yielded F. tularensis LVS ompA promoter-controlled Wzy-GFP fusion proteins (P ompA -wzy-GFP). The constructs were introduced into F. tularensis LVS, and each transformant was subcultured and resuspended with MH broth to normalize the A value (A 620 ϭ 1.0). The intensity of GFP fluorescence was measured with a fluorometer (Fluoroskan Ascent, Theomo Electronic Corp., Waltham, MA) at wavelengths of 485 nm (excitation) and 520 nm (emission).
Immunoblot Analysis-Whole-cell lysates of wild-type and mutant F. tularensis LVS were suspended in 1ϫ Laemmli sample buffer (Bio-Rad), heated at 100°C for 10 min, and subjected to SDS-PAGE on a 4 -20% gradient gel. For immunoblot analysis, the bands were transferred overnight onto an Immobilon-P transfer membrane (Millipore, Billerica, MA) in Towbin buffer (25 mM Tris-HCl, 192 mM glycine, 20% methanol). After transfer, the membranes were blocked in 5% alkali-soluble casein (Novagen, Madison, WI) and probed with primary antibodies at a dilution of 1:3,000 in 1% casein (Novagen) at room temperature for 1 h. Monoclonal antibody (mAb) ab2033 was purchased from Abcam Inc. (Cambridge, MA). The rabbit polyclonal antiserum to F. tularensis (␣-F. tularensis LVS serum) used in this study was generated with the Classic-Line protocol of Lampire Biological Laboratories (Pipersville, PA). The appropriate secondary antibodies conjugated to alkaline phosphatase were used at a dilution of 1:3,000 in 1% casein and incubated for 1 h at room temperature. The blots were developed with an alkaline phosphatase detection kit (Novagen) according to the manufacturer's protocol.
Complementation Analysis-To express the putative wzy of F. tularensis LVS under the influence of the groEL promoter (P groEL ), the 238-bp P groEL was amplified by PCR with the following primer pairs: PgroEL RBS _F (5Ј-ACCGGTACCTATAC-CCTTCAAGCTTTG-3Ј; KpnI site underlined) and Pgro-EL RBS _R (5Ј-TGTGAATTCGCGTTAACAATCTTACTCCT-TTG-3Ј; EcoRI site underlined). The amplified groEL promoter fragment was ligated into the KpnI and EcoRI sites of broad host range shuttle plasmid pFNLTP6 to yield pTH17. Subsequent amplification of 1,230 bp of the putative wzy gene was carried out with primers wzyC_F (5Ј-ACCGAATTCATGTA-CATAAAAAAAGTG-3Ј; EcoRI site underlined) and wzyC_R (5Ј-TGTGGATCCTCAAATCACACTCCTAGTTGT-3Ј; BamHI site underlined). The amplified putative wzy fragment was then ligated into the EcoRI and BamHI sites of pTH17 to obtain pTH21. For complementation studies, recombinant plasmid pTH21 was introduced into the mutant strain under standard electroporation conditions (34), and transformants were selected on CHAH supplemented with 2% bovine hemoglobin and kanamycin (10 g/ml) after incubation at 37°C as described above. The clones harboring pTH21 were confirmed by DNA sequencing and were subsequently used for immunoblot analysis to confirm complementation. The other complementation study used wzy of S. flexneri 2␣ and was conducted according to the same scheme. The 1,149-bp S. flexneri 2␣ wzy gene was amplified with primers Shi_wzy_F (5Ј-ACCGAAT-TCATGAATATAAATAAAATT-3Ј; EcoRI site underlined) and Shi_wzy_R (5Ј-TGTGGATCCTTATTTTGCTCCAGAA-GTGAG-3Ј; BamHI site underlined), and the amplified fragment was subsequently ligated into pTH17 to yield pTH33. Plasmid pTH33 was also electroporated into the F. tularensis LVS wzy mutant strain for further immunoblot analysis.
Immunoelectron Microscopy-Negative contrast immune electron microscopy was performed as previously described (40). Bacterial cells grown on CHAH plates were labeled with primary antibody (mAb ab2033) and rabbit anti-mouse bridging antibody at final dilutions of 1:20 and 1:100, respectively. Secondary antibody tagged with 15 nm of protein A gold (Cell Microscopy Center, University Medical Center, Utrecht, The Netherlands) was used at a final dilution of 1:70. All images were recorded at a primary magnification of ϫ18,500.
LPS Purification-LPS was purified from the F. tularensis LVS wzy mutant strain by a modification of the hot phenolwater method (41). First, cells were inoculated from CHAH plates into five 250-ml vented cap baffled flasks containing 100 ml of tryptic soy broth with 0.1% cysteine, 0.025% ferric pyrophosphate, and 0.1% antifoam 204 (all from Sigma-Aldrich). After incubation overnight at 37°C and 200 rpm, these "seed" cells were inoculated into five 3-liter vented cap baffled flasks containing 2 liters of the broth with additives just described, one 250-ml flask per 3-liter flask, and this preparation was incubated at 37°C and 200 rpm for 72 h. Cells were harvested by centrifugation and were frozen at Ϫ80°C until used. For LPS purification, a hot solution of fresh 50% phenol was added to thawed cells (ratio, 1 g of cells to 5 ml of phenol solution). This preparation was mixed first for 2 h at 68°C with sterile glass beads and an overhead mixer and then on a stir plate overnight at 4°C. Cell debris and phenol were removed by centrifugation (6000 ϫ g for 20 min at 4°C) in Teflon FEP centrifuge bottles (Nalgene, Rochester, NY). The top aqueous phase was separated and diluted with one volume of water and two volumes of ether to remove additional phenol. The solution was mixed vigorously for 10 min in a separatory funnel and then allowed to separate overnight at room temperature. The bottom aqueous phase containing LPS was decanted. Residual ether was removed from the collected aqueous phase using a rotary evaporator, and the volume was reduced by lyophilization. The sample was then thawed and dialyzed against water before enzyme treatment. Nucleic acid and protein were degraded by sequential treatment with DNase, RNase, and Pronase (Worthington). The solution was collected and clarified by low speed centrifugation (5000 ϫ g overnight at 4°C). The supernatant was lyophilized to dryness and resuspended in PBS before further purification on a Sephacryl 300-HR 26/100 column (GE Healthcare). For LPS detection, refractive index-positive samples were screened for LPS in a competitive inhibition ELISA with wildtype LPS as the coating antigen and mAb ab2033. Fractions containing LPS were pooled and dialyzed against water. A portion of the solution was lyophilized to dryness, and the LPS concentration was determined.
Preparation of Membrane Fraction-The method used for preparation of the periplasmic portion of wild-type F. tularensis LVS and the wzy mutant was modified on the basis of previous descriptions (42,43). The wild-type and mutant strains were cultured in 2-liter baffled flasks until an A 620 of ϳ1.7 (stationary phase) was reached. Cultures were subjected to centrifugation at 8,000 ϫ g for 30 min at 10°C for collection of cells, and the resulting cell pellets were washed with phosphate-buffered saline. Cell pellets were suspended in 35 ml of 0.75 M sucrose (in 5 mM Tris, pH 7.5) and were transferred to a sterile flask with a stir bar. A 70-ml volume of 10 mM EDTA (in 5 mM Tris, pH 7.8) was slowly added. After incubation for 30 min at room temperature, lysozyme was added slowly to a final concentration of 200 g/ml. After further incubation for 30 min at room temperature, the suspensions were centrifuged at 7,500 ϫ g for 30 min at 10°C. The supernatant was dialyzed for 2 days in sterile deionized water to remove sucrose. After dialysis, the supernatant fractions that contained periplasm and outer membrane fractions were centrifuged at 210,000 ϫ g (ϳ40,000 rpm in an Optima TM LE80K, Beckman Coulter, Fullerton, CA) for 2 h at 4°C for separation of the periplasmic (supernatant) and the outer membrane (pellet) fractions. The fractionated periplasmic portions were concentrated by lyophilization (Lyo-center, VirTis, Gardiner, NY) for further study.
Site-directed Mutagenesis-In vitro site-directed mutagenesis was performed with the QuikChange site-directed mutagenesis kit (Stratagene, La Jolla, CA), which was used according to the manufacturer's instructions. The plasmid pTH21 was used as a template for mutant strand synthesis, and newly synthesized mutated plasmids were introduced into E. coli XL-10-Gold competent cells (Stratagene) for nick repair. Mutagenesis was confirmed by sequence analysis. After sequence verification, the plasmids harboring mutated Wzy residues were introduced into the F. tularensis LVS wzy mutant strain, and complementation of O-PS expression was assessed.
Analysis of F. tularensis Putative O-antigen Polymerase
(Wzy)-The putative O-antigen biosynthetic gene cluster of F. tularensis is estimated to be ϳ17 kb in length and has been predicted to contain 15 genes involved in O-antigen biosynthe-sis (12). All of the genes within the cluster were assigned putative functions on the basis of sequence similarity with genes from O-antigen biosynthetic clusters in other Gram-negative bacteria. The putative O-antigen polymerase, located at the 7th position in the cluster, showed 53.5% similarity with that of Pseudomonas aeruginosa PA 103 (accession number AAD45264), 50.8% similarity with that of Salmonella enterica subsp. enterica serovar Typhi strain CT18 (accession number NP_456180), and 48.7% similarity with that of Shigella dysenteriae Sd197 (accession number YP_403785) by ClustalW alignment (45).
The O-antigen polymerases of other bacterial species are generally extremely hydrophobic and have ≥11 putative membrane-spanning domains (27,28,33). The deduced number of membrane-spanning helices in the putative O-antigen polymerase of F. tularensis LVS was 11. Hydropathy plotting (46) of the O-antigen polymerase of F. tularensis LVS showed a high degree of hydrophobicity similar to that of other bacterial polymerases (data not shown).
It has been reported that the G + C content of the coding region of O-antigen polymerase is generally lower than that of the bulk DNA of the bacterium (27,47). Indeed, the deduced G + C content of the putative O-antigen polymerase in F. tularensis LVS was 23%, a value lower than that of the bacterium itself (33.2%) (12).
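As a minimal illustration of the comparison above, the snippet below computes the G + C fraction of a coding sequence; the short sequence shown is a hypothetical placeholder, not the actual wzy open reading frame.

```python
# A minimal sketch of the G + C content calculation discussed above.
# The example sequence is a hypothetical placeholder, not the wzy ORF.
def gc_content(seq: str) -> float:
    """Return the G + C fraction of a nucleotide sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

example_orf = "ATGTACATAAAAAAAGTGTTAGCTATTACTAAT"  # placeholder sequence
print(f"G + C content: {gc_content(example_orf):.1%}")
```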
Analysis of codon usage in the putative wzy gene showed a higher rate of use of "rare" or modulator codons, as defined by Grosjean and Fiers (48), than in other genes (27). The rare codons are CUA (leucine), AUA (isoleucine), AGA/AGG/CGA/CGG (arginine), and GGA/GGG (glycine), and the rate of rare codon use for the putative wzy gene in F. tularensis LVS was 10.7%; this value represents high-frequency rare codon use similar to that documented for other bacterial wzy genes (e.g. S. dysenteriae, 9.2%; S. enterica M40, 10.9%; S. enterica M32, 12.5%).
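The rare-codon rate quoted above is straightforward to recompute; the sketch below counts the listed modulator codons in a coding sequence (written in DNA form, with T in place of U). The example ORF is a hypothetical placeholder.

```python
# A minimal sketch of the rare ("modulator") codon frequency calculation
# described above, using the codon list quoted in the text (DNA spelling).
# The example ORF is a hypothetical placeholder.
RARE_CODONS = {"CTA", "ATA", "AGA", "AGG", "CGA", "CGG", "GGA", "GGG"}

def rare_codon_rate(orf: str) -> float:
    orf = orf.upper()
    codons = [orf[i:i + 3] for i in range(0, len(orf) - len(orf) % 3, 3)]
    return sum(codon in RARE_CODONS for codon in codons) / len(codons)

example_orf = "ATGTACATAAAAAAAGTGCTAGGAAGACGG"  # placeholder sequence
print(f"rare codon usage: {rare_codon_rate(example_orf):.1%}")
```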
Construction and Characterization of wzy Mutant-F. tularensis LVS wzy mutation was generated as described previously (36, 37) (see "Experimental Procedures"). The non-polar nature of the wzy mutant was confirmed by PCR and genomic DNA sequencing (data not shown). The expression of outer membrane O-PS in the wild-type and wzy mutant strains was investigated by immunoblot analysis. Whole-cell lysates were prepared from wild-type F. tularensis LVS, a previously described wbtA mutant (31), and the wzy mutant. WbtA is a dehydratase that is presumably necessary for the synthesis of ␣-D-QuiNAc, a sugar required for O-antigen subunit assembly. The wbtA mutant produces lipid A and core polysaccharide but cannot produce O-PS. Lysates of the LVS strain and its wbtA and wzy mutants were compared by immunoblot analysis with monoclonal antibody specific for the O-antigen of F. tularensis LVS (ab2033) ( Fig. 2A, a); as a control to ensure that the quantities of lysate present were sufficient for visualization of O-PS should it exist, we also stained the wzy mutant and the wbtA mutant with polyclonal antiserum to LVS ( Fig. 2A, b). A characteristic laddering pattern of O-PS was seen when wild-type F. tularensis LVS was examined ( Fig. 2A, a); the laddering bands were dense enough to obscure the visualization of any protein in the wildtype preparation. In contrast, an accumulation of only low molecular size O-antigen was observed in the whole-cell lysate from the wzy mutant strain ( Fig. 2A, a), and, as has been previously reported (31), no O-antigen was seen with the wbtA mutant ( Fig. 2A, a). Probing with antiserum to whole F. tularensis LVS revealed distinct protein bands in the wzy mutant lane that were similar to those observed with the wbtA mutant ( Fig. 2A, b).
Complementation of the wzy Mutant-To demonstrate that the abrogation of O-antigen polymerization in the wzy mutant strain was due solely to inactivation of the wzy gene, we complemented the phenotype by introducing an intact copy of the wzy gene in trans. Immunoblot analysis of the complemented wzy mutant demonstrated the typical ladder pattern of LPS, with complete rescue of O-antigen (Fig. 2B, lane 4). This result indicated that Wzy participates in the multienzyme complex biosynthetic pathway for O-antigen polysaccharide in F. tularensis LVS. Complementation of the wzy mutant F. tularensis strain with wzy of S. flexneri 2a was unsuccessful (Fig. 2B, lane 5), probably because of the low degree of sequence identity between the wzy genes of these two strains (16.1%). Immunoelectron microscopy with the O-PS-specific mAb detected surface gold labeling on the wzy mutant (Fig. 2C, c) and on the wild-type LVS strain (Fig. 2C, a). However, as shown previously (31), the wbtA mutant (Fig. 2C, b) had no O-PS on its surface. These findings are consistent with the hypothesis that the wzy mutant synthesizes and attaches only one repeating unit of O-PS to the core.
Membrane Fractionation of Putative O-polymerase Wzy and Analysis of the O-antigen Polysaccharide Expression Pattern-
In the wzy-dependent O-PS biosynthetic pathway, polymerization of O-antigen subunits by Wzy occurs at the periplasmic side of the inner membrane after newly synthesized O-antigen subunits are translocated from the cytoplasm into the periplasmic region by Wzx (Fig. 1). To further examine the O-PS expression pattern of both the wzy mutant and the wild-type strain at the subcellular level, periplasmic and outer membrane fractions of both strains were isolated, and the expression of O-PS in each fraction was examined by immunoblot analyses using the O-antigen-specific mAb ab2033 (Fig. 3). The purity of the subcellular fractions was confirmed by mass spectrometry (Taplin Mass Spectrometry Facility, Harvard Medical School, Boston, MA). Briefly, the isolated fractions were subjected to SDS-PAGE, and the separated protein bands from each fraction (including bands of the same molecular weight isolated from different fractions) were analyzed by MS to identify the proteins present, to predict their subcellular location, and to confirm the expected location of subcellular marker proteins (see supplemental Figs. 1 and 2). Immunoblots of the periplasmic fractions (Fig. 3, lanes 4 and 5) showed polymerization of wild-type LPS and the presence of one repeating unit of O-antigen in the wzy mutant. Thus, the wzy mutant fails to polymerize the O-PS in the periplasm of the cell, a finding consistent with the characterization of this gene as an O-antigen polymerase. By comparing the density of the Western blots with purified standards of known concentration of the LPS from LVS and from the LVS wzy mutant, it was determined that there is ~2.6-fold more O-antigen in the outer membrane fraction of LVS (~80 pg/2-liter culture) than in that of the wzy mutant strain (~30 pg/2-liter culture) and ~2.4-fold more O-PS in the periplasmic fraction of the wzy mutant (~60 pg/2-liter culture) than in that of the parental LVS strain (~25 pg/2-liter culture) (data not shown; the amount of LPS in each fraction was normalized to A600 = 1.7, 2-liter culture).
Identification of Putative Catalytic Residues That Maintain the Function of Wzy-To identify catalytic residues that might be responsible for the polymerase function of Wzy, amino acid sequences of the O-antigen polymerases in various bacteria, including F. tularensis LVS, were aligned by ClustalW (45), and topology maps were made. All of the topology maps were based on the TMHMM server V.2.0 (available on the World Wide Web), the previously published topology map of Shigella (33), and other prediction programs (SOSUI, TopPred 2, and TMpred, available on the World Wide Web).
Alignment of the protein sequences from other bacteria showed that there were no obvious conserved regions of O-antigen polymerase. Therefore, we compared the Wzy protein sequences from bacteria originating from the same phylogenetic tree (for F. tularensis LVS, the ␥ subdivision of Proteobacteria) (Fig. 4A). Multiple alignment of eight other evolutionarily related O-antigen polymerases revealed that a Gly 323 is the only absolutely conserved residue among bacterial sequences examined (Fig. 4A). On the basis of the alignment results, we chose 11 candidates for the generation of site-specific mutations by substitution of amino acid residues. Plasmids with single site-specific amino acid substitutions in F. tularensis LVS wzy were generated (see "Experimental Procedures"), and their ability to restore the expression of O-antigen in the wzy mutant was examined. The failure of the plasmid harboring the site-specific amino acid substitution to restore the expression of the O-antigen would suggest a critical role for the amino acid. Among the 11 site-specific mutations created, we observed that the substitution of Gly 323 with glutamic acid, serine, and arginine (Fig. 4C, b) resulted in loss of the protein's polymerization function. We also observed the same results in substitutions with proline, tyrosine, leucine, glutamine, and aspartic acid (data not shown). The TMHMM server and the previously published topology map of Shigella (33) indicated that Gly 323 is located in the periplasm. Comparison of the topology of the O-antigen polymerase in various Gram-negative bacteria predicted the presence of 4 or 5 peptide loops in the periplasm (Fig. 5). The second (or third) and fourth (or fifth) loops are generally the largest, and the fourth (or fifth) is usually larger than the second (or third). The fifth loop in F. tularensis (amino acids 268 -327) is composed of Ͼ50 amino acids and has abundant glycine residues (Fig. 4B). The presence of glycine-abundant loops in the periplasm of Wzy is interesting because glycine residues have been found to contribute to the flexibility of some loops associated with enzyme catalysis (50). Topology indicated that Gly 323 is located on the fifth loop, which is both the largest and most glycine-abundant.
Many active site residues are composed of amino acids with side chain charged groups, such as aspartic acid, glutamic acid, lysine, arginine, and histidine. Because polymerization is thought to occur in the periplasm, we focused our next group of experimental substitutions on the charged amino acids in the periplasmic loops of the protein. Seventeen randomly chosen candidate amino acids were mutated to alanine in an assay for O-PS expression complementation in the wzy mutant (Fig. 4C, a). Substitution of Asp 177 with other amino acids resulted in a functional defect of the Wzy protein (Fig. 4C, b). We found that Asp 177 is located on the third loop (amino acids 142-178), which is the second largest loop in the periplasm and also contains abundant glycine residues. Mutation of residues adjacent to Gly 323 and Asp 177 (Gly 176 , Gly 178 , and Tyr 324 ) revealed a contribution of Gly 176 to the catalytic performance of the protein (Fig. 4C, b). Similarly, we found that Tyr 324 (adjacent to Gly 323 ) plays a role in maintaining the function of the protein (Fig. 4C, b).
To ensure that the functional results described above were not the result of interference with Wzy expression, we determined the relative expression of Wzy in each mutant clone. Each Wzy clone was fused with a GFP reporter gene and introduced into F. tularensis LVS, and the expression level was measured by fluorometry (Fig. 6). F. tularensis LVS expressing only GFP without Wzy (P-GFP) showed a ~125-fold increase in fluorescence compared with the promoterless negative control (wzy-GFP) or the parental bacteria, which do not harbor the plasmid (LVS). The fluorescence intensity in bacteria expressing the intact Wzy (P-wzy-GFP), or a mutagenized Wzy still able to complement the O-PS ladder pattern (P-wzy(K152A)-GFP), was ~13-fold higher than that of the negative controls. The expression level of Wzy in the mutant clones that were not able to complement the O-PS ladder pattern (P-wzy(G176E)-GFP, P-wzy(D177A)-GFP, P-wzy(G323E)-GFP, and P-wzy(Y324E)-GFP) was increased 10-13-fold relative to the negative control, indicating that Wzy in each mutant clone was expressed at nearly the same level as the intact Wzy.
DISCUSSION
The O-antigen polymerase (Wzy), an integral membrane protein with 11-13 transmembrane domains, is responsible for the polymerization of O-antigens (43) as well as capsules (51) and other cell surface polysaccharides (52). Studies of O-antigen polymerases of many bacteria are hindered by the absence of a conserved region within and between species, an inability to recombinantly express in vitro the protein because of its hydrophobicity, the higher rate of use of rare codons, and a weak ribosomal binding site (47). Differences in primary amino acid sequences are probably necessary because Wzy recognizes cognate O-units that usually differ significantly in structure. Although Wzy apparently mediates the formation of a typical glycosidic linkage between O-units, it displays no homology to known glycosyltransferases.
Some studies have partially characterized the O-antigen polymerase (27,47); however, this protein has received relatively little attention because most investigations have focused on the entire O-antigen biosynthetic locus, of which Wzy is only one component (12,33,53). The synthesis of O-antigens involves two distinct polymerization models that are distinguished by the direction of chain elongation (54). One model entails growth at the reducing terminus and follows a wzz- and wzy-dependent polymerization pathway; the other entails growth at the nonreducing terminus and follows a wzz- and wzy-independent pathway. In the model of growth at the reducing terminus, polymerization involves the addition of nascent O-polymer from one undecaprenyl pyrophosphate carrier to the nonreducing terminus of a newly synthesized single O-unit attached to a second undecaprenyl pyrophosphate carrier (55). The presence of the putative Wzy and the complex structure of the O-antigen repeating unit in F. tularensis indicate that polymerization of this organism's O-antigen would follow the model of growth at the reducing terminus.
Feldman et al. (56) demonstrated that only a single undecaprenol phosphate-linked sugar is required for Wzx-mediated translocation from the cytoplasm to the periplasm (i.e. that the complete O-antigen subunit is not required for the activity of Wzx). Likewise, our electron microscopy study with immunogold-labeled LPS (Fig. 2C, c) showed that a single O-antigen unit also can be translocated from the periplasm to the outer membrane surface. This observation suggests that the protein involved in translocation from the periplasm to the outer surface does not require the complete set of polymerized O-antigen subunits.
It is interesting that we did not observe the typical modal distribution of O-PS molecular size in wild-type F. tularensis LVS. The modal size distribution of O-PS is thought to be dependent on the Wzz protein (22,32,57), the chain length determinant molecule. The absence of a Wzz homolog in F. tularensis LVS may explain why the O-PS of this strain does not follow the modal size distribution pattern of the LPS of many other bacteria expressing Wzz. Loss of modal size distribution of LPS has been reported in other organisms lacking Wzz (58).
It has previously been shown that O-antigen polymerization by wzy occurs in the periplasm (20) and that the polymerized O-antigen chain is translocated into the outer membrane. We have demonstrated that, after subcellular fractionation, O-antigens were present in the periplasmic region of both the wild-type and the mutant strain. However, in the wzy mutant, the O-PS was of very small molecular size and did not appear to be polymerized. In contrast to that for O-PS, the mechanism for translocation of synthesized capsular polysaccharide from the periplasm to the outer membrane has been well described (44,59,60).
As noted previously, difficulties in expressing these highly membrane-integrated proteins in vitro and the absence of a conserved motif among other O-antigen polymerases have made it difficult to elucidate the exact functional and biochemical mechanism of polymerization. On the basis of multiple alignments of hydrophobicity plots, Morona et al. (27) hypothesized that the periplasmic domain of the O-antigen polymerase is involved in bonding the O-antigen repeat units and in polymerizing them into the O-antigen chain. Moreover, McGrath and Osborn (49) reported that O-antigen polymerase is active on the periplasmic side of the cytoplasmic membrane. Studying the predicted topology of O-antigen polymerases, we observed that usually two relatively large peptide loops exist in the periplasm and that these loops have a tendency to include more glycine residues than do other loops (Fig. 5). We have now
Product Qualification as a Means of Identifying Sustainability Pathways for Place-Based Agri-Food Systems: The Case of the GI Corsican Grapefruit (France)
Existing frameworks offer a holistic way to evaluate a food system based on sustainability indicators but can fall short of offering clear direction. To analyze the sustainability of a geographical indication (GI) system, we adopt a product-centered approach that begins with understanding the product qualification along the value-chain. We use the case of the GI Corsican grapefruit focusing on understanding the quality criteria priorities from the orchard to the store. Our results show that certain compromises written into the Code of Practices threaten the system's sustainability. Today the GI allows the fruit to be harvested before achieving peak maturity and expectations on visual quality lead to high levels of food waste. Its primary function is to help penetrate mainstream export markets and to optimize labor and infrastructure. Analyzing the stakeholders' choices of qualification brings to light potential seeds for change in the short run such as later springtime harvests, diversification of the marketing channels, and more leniency on the fruit's aesthetics. These solutions lead us to reflect on long-term pathways to sustainable development such as reinforcing the fruit's typicality, reducing food waste, reorganizing human resources, and embedding the fruit into its territory and the local culture.
Introduction
The Corsican citrus fruit sector's use of geographical indications (GIs) is considered a success story, given the challenges the actors have had to overcome since the first orchards were planted in the 1960s, and thanks to the economic prosperity spurring new investments today. By linking quality to origin, farmers and commercial agents have collectively built strong brand equity securing a niche in the French continental market [1]. The sector's success, infrastructure, and commercial strategies were built around a flagship product, the Corsican clementine, which obtained a GI in 2005 and represents over 80% of commercial volumes produced in the Corsican citrus fruit sector (about 31,000 tons in 2017 according to regional census data). A very distant second is the Corsican grapefruit's commercial production (at 6500 tons), which also obtained a GI in 2014 [2]. This secondary fruit is considered complementary to the clementine since it extends the growing season. This helps secure the sector's place on the market by extending its commercial season and allowing for greater return on infrastructure investments.

In this study, we sought to understand the agreements and the tensions among the GI system's stakeholders about how the product is qualified. By identifying the controversies within the system, we believed that it would be easier to identify possible pathways for change [29]. To do this, we chose to take a relational method: we adopted a product-centered approach, focusing our analysis on the construction of quality along the value chain, from the field to the store. We hypothesized that analyzing the qualification of a GI product through a product-centered approach can support identifying pathways to sustainable development.
A product's description goes beyond its physical characteristics; it is a tangible embodiment of the growers' shared values around quality and production practices [28,30]. Its qualification can also help display the compromises made by the various actors along a supply chain, and the various internal and external constraints they face [1,31,32]. By guiding our analysis with how the product is qualified, it allows us to better understand why certain decisions in the CoP are made during production and on the market, and to pinpoint specific areas that may be problematic.
Whether it is about fruits and vegetables [33], or terroir products from developing countries [34,35], researchers have underscored the fact that GIs are often used as a tool to penetrate globalized mainstream markets by helping to achieve standardized quality criteria that meet those defined by the buyers. According to a GI's international definition (TRIPS Agreement, 1994), there is an intrinsic relationship between the GI products and their origin. We have seen growing interest in developing GIs by companies operating on long/modern channels since reference to the geographical name is an effective tool to reach consumers looking for local or certified products [36]. In the case of the Corsican grapefruit, we are interested in understanding the logic used by suppliers in applying this strategy, and how they perceived consumer expectations and market trends.
Increasingly, we observe how strong downstream forces from giant retailers are influencing the qualification of these terroir products and their corresponding business models [33,37,38]. For example, retailers have been improving the quality of their private label brands by imposing standards such as Global G.A.P. and further developing regional branding associated directly with products that are certified organic or as a GI [39,40]. In these instances, the quality control criteria are based on traceability and conformity needs. Systematically, the taste is quantified and measured based on a certain threshold of sugar, monitored by the Brix. Visual appearance is the primary means of communicating the quality of the product and shapes much of the commercial strategy [41]. The use of GIs to regulate and commercialize products appears to help standardize the products but it also appears to banalize them.
Other studies suggest a more complex relationship between GIs, local products, and mainstream market channels [31]. In the case of the GI Corsican clementine, previous work has shown that GIs constitute niches where terroir and conventional schemes interact and combine [1]. The qualification of the product reflects this duality. For example, the Corsican clementine has specific characteristics that distinguish it from other easy peelers, and which are linked to the Mediterranean terroir and its local history. Factors that help distinguish it are its green underside, tangy taste, and the guarantee of freshness evidenced by the fact that it is marketed with the leaves still attached. It is, however, still subject to other forms of standardization as it must meet European criteria for visual quality. As with the Corsican clementine, we examined to what extent these tensions between terroir and conventional market standards exist and impact the sustainability of the Corsican grapefruit's GI and, by extension, that of the Corsican citrus sector.
In this article, our analysis is structured as follows: we begin by providing further context on characteristics of the value-chain for the Corsican citrus fruit sector and of the grapefruit production. We then outline the data collected and used to study the grapefruit's product qualification. We report on the results in two parts. First, we describe the primary quality objectives of the various stakeholder groups along the value-chain, and their key corresponding production and commercialization practices according to prominent themes. Secondly, we provide a typology of stores and wholesalers according to their buying behavior and buying criteria. In our discussion, we address the areas where lock-ins impede change by reflecting on why certain qualities take priority over others. We then identify potential pathways to progress in the sector's overall sustainability and allow for product line extensions within the sector's broader diversification strategy. We conclude by illustrating the ways in which this research demonstrates a new approach to analyzing the sustainability of agri-food systems based on quality and origin products.
Materials and Methods
According to regional statistics, in 2017, the Corsican citrus sector counted 149 farmers growing the clementine commercially, and another 36 producing the grapefruit (most of whom also produce the clementine) [42]. The grapefruit is commercialized according to several quality schemes: 90% of productive surfaces are GI, 9% are organic only, and only 1% follows no official scheme. Of those under the GI, 43% are also organic. The majority of producers follow specific rules under the GI which place controls on production and commercialization practices as well as on product quality. An overview of the main production and commercialization specifications of the GI Corsican grapefruit is summarized in Table 1. As for organic certification, the main requirements relate to production practices, more specifically the exclusion of synthetic fertilizers and genetically modified organisms [43].
Table 1. Main production and commercialization specifications of the GI Corsican grapefruit.

Production
Trigger: Physicochemical analysis of the fruit in each plot before harvesting: juice levels > 38%, acidity (A) < 2 g/100 mL of juice, sugar (E) > 9 °Brix, for a sugar-to-acidity ratio (E/A) > 6. The fruit must reach its described color profile on the tree.
Harvest: Must be done by hand. All fruit on the tree is harvested at once, per plot.
Sorting: Fruit with scuffs, markings, or signs of rot or fungus is systematically discarded.
Conditioning: There are no post-harvest treatments. Fruit must be conditioned within the zone, at proximity to the orchards. Fruits can be coated with natural wax.

Packaging and Distribution
Traceability: Must follow the labeling protocol from the orchard through to the store.
Stocking: Calibrated fruits can be stored in a sheltered and well-ventilated room or in a cold room at a temperature between 8 °C and 12 °C for up to 8 weeks.
Packaging: In small open crates, there must be a sticker on >50% of the fruit. Fruit can also be sold in flow packs or mesh bags. Distributing fruit in large crates is forbidden.
Final inspection: Absence of evolving defects on the skin; a maximum of 10% visual defects is acceptable.

1 Data were extracted from the public decree concerning the GI Corsican grapefruit. 2 Under European Union regulation (Commission Delegated Regulation 2019/428 of 12 July 2018 amending Implementing Regulation No 543/2011 as regards marketing standards in the fruit and vegetable sector), "extra class" (superior quality) means that there are no defects, "class 1" (good quality) means that there are few defects, and "class 2" (commercial quality) means that there is an acceptable amount of defects.
As with the clementine, the grapefruit's commercialization rests essentially on the marketing of a fresh whole fruit, harvested at "maturity" by hand, without undergoing post-harvest conservation treatments. The clementine harvest is typically from mid-September to the end of December, and the grapefruit harvest is from mid-February to early June. This corresponds to a period when there are few grapefruits of comparable quality on the market: Florida's exports are ending, and South Africa's have not yet arrived.
Maturity is based on a minimum sugar to acidity ratio. When physicochemical tests reveal that juice, sugar (Brix (E)), and acidity (A) have reached minimum acceptable levels (notably a given E/A), the harvest is systematically triggered. The product specifications for the grapefruit require that all fruits on the tree must be harvested at once; the clementine's specification imposes several waves of harvesting. In both cases, it is required that the fruits be hand-picked, a task usually fulfilled with the help of temporary skilled labor from Morocco, on 6-month working visas [44].
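To make the trigger rule concrete, here is a minimal sketch of the threshold check implied by the CoP values quoted in Table 1 (juice > 38%, acidity < 2 g/100 mL, Brix > 9, E/A > 6, plus the color requirement). The function and field names are illustrative, not part of the CoP.

```python
# Minimal sketch of the GI harvest-trigger check described in Table 1.
# Threshold values come from the Code of Practices as quoted in the text;
# the function and field names are illustrative, not part of the CoP.
from dataclasses import dataclass

@dataclass
class PlotSample:
    juice_pct: float        # juice yield, % of fruit weight
    acidity_g_100ml: float  # acidity (A), g per 100 mL of juice
    brix: float             # sugar content (E), degrees Brix
    color_profile_ok: bool  # fruit has reached its described color on the tree

def harvest_can_be_triggered(sample: PlotSample) -> bool:
    """Return True when a plot meets every GI maturity threshold."""
    ratio_ok = (sample.brix / sample.acidity_g_100ml) > 6 if sample.acidity_g_100ml else False
    return (
        sample.juice_pct > 38
        and sample.acidity_g_100ml < 2
        and sample.brix > 9
        and ratio_ok
        and sample.color_profile_ok
    )

# Example: an early-season plot that is still quite acidic fails the E/A ratio.
early = PlotSample(juice_pct=40, acidity_g_100ml=1.9, brix=9.5, color_profile_ok=True)
print(harvest_can_be_triggered(early))  # False: 9.5 / 1.9 is about 5.0, below the required 6
```

This also illustrates the tension discussed later in the article: a plot can satisfy each individual minimum early in the season while the sugar-to-acidity balance remains at the low end of acceptability.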
Today, the Corsican production represents about 0.9% of the total European production of grapefruit, and about 9% of the French market [7]. The majority (80%-90%) of all volumes produced are destined for the French continental market, mainly through chain stores. They pass through four main producer-owned cooperatives who act as conditioning stations and commercial agents, selling into conventional or organic chain stores. The packed fruit is shipped by boat to the port of Marseille (south-east of France) and trucked to the distribution centers in Cavaillon (the logistic hub for fruits and vegetables in the hinterland of Marseille) for subsequent delivery to French stores.
Our goal was to understand the qualification of the Corsican grapefruit by the different stakeholders along the value chain: how they perceive its market, consumer expectations, and community needs; what their quality goals are; and how they achieve them through production and commercialization practices. We adopted a comprehensive qualitative approach, based on semi-directive interviews with a diversified sample set. The value chain was conceptualized into three main groups: growers (2.1); grower associations and commercial agents, including conditioning stations and dispatching to the continent (2.2); and resellers such as wholesalers, brokers, and stores (2.3). Though there were overlapping themes, we prepared tailored interview guides for each group.
We conducted all the interviews during the spring of 2019, mostly in-person, on a one-on-one basis. The timing of the research permitted observation in the field, in conditioning stations, and in distribution centers, during peak production times, and allowed for informal tasting throughout the harvest season.
Growers
We sought to understand the growers' motivations for producing grapefruits, their quality goals, and how their production practices have evolved over time to achieve these goals. Notably, growers were asked to describe their ideal grapefruit, the qualities they believed to be most prized in the market, and then to reflect on the sustainability of the fruit production, given their challenges and constraints.
We built a diverse sample set and interviewed 15 growers (see Table 2). We had representation from all six of the producer organizations (POs), as well as the only two growers who commercialize independently (one was organic only, the other was GI and organic). All growers adhere to at least one quality scheme: one organic, five GI and organic, and nine GI. We connected with farms of various sizes, ranging from the smallest orchard (<1 ha) to the largest (>14 ha), and various levels of diversification, from two to more than four commercial crop varieties per farm. There was no evidence of farms exclusively dedicated to the grapefruit.

Table 2 notes: GI = geographical indication. 1 Growers who are in transition to organic certification are counted under the "organic" category. 2 Categories represent the first, second, and third quartiles of the whole population of farmers who grow grapefruit. According to regional statistics, the surface dedicated to grapefruit per farm ranges from 0.1 ha to 15.6 ha, with an average of 3.8 ha.
Commercial Agents
We sought to understand local dynamics and commercial and practical implications of selling grapefruits, as well as gain insights into quality expectations from customers. The interview guide began with a series of general questions related to the commercial operations, then focused on understanding the business relationships between growers and agents, as well as between the agents and various types of stores. Commercial agents were also asked to describe the ideal Corsican grapefruit, and how this ideal compared to the market's expectation of quality.
We connected with the head offices and the logistics teams of the "big three" producer organization commercial entities (OPAC, AgruCorse, GIE Corsica Comptoir) at the Corsica and Cavaillon locations, and with the local organic producers' cooperative (ALIMEA), in addition to the two independent farmers who commercialize their own products. Our visit to the mainland (Cavaillon) gave us a better understanding of the logistical and commercial issues related to marketing the Corsican grapefruit and, more specifically, the nature of the relations between the sellers and buyers.
Wholesalers and Stores
A group of Corsican resellers and stores were surveyed on the quality criteria buyers are looking for; patterns of responses they see amongst buyers; and finally their choice of supplier(s). The interview questionnaire sought to reveal which product qualities took priority, how the stores selected their suppliers, and the future outlook for the grapefruit based on observable consumer trends.
We assumed that the diversity of modern or traditional distribution channels that we could reach in Corsica might give us insight into how the Corsican grapefruit is being commercialized. We constructed our sample so as to have a good representation of the various commercial channels, including supermarkets who usually source through corporate procurement warehouses and are the main buyer of the grapefruit (i.e., 90% of volume).
We considered five different cities on the island, according to demographic and socio-economic profiles: rural area (Corte, Aleria), urban area (Bastia, Ajaccio) and touristic area (Calvi). We contacted the produce buyers responsible for citrus at four different types of businesses in each location. In total 22 businesses were surveyed across the five zones as follows: six wholesalers, three chain stores, five small independent stores, and seven organic stores. We estimated that this would be an appropriate sample size to observe the diversity of the quality criteria across each or within each type of business.
Results
In order to illustrate how the stakeholders along the value chain qualify the Corsican grapefruit, our results are presented in three parts. In Section 3.1, we describe the quality objectives of the local citrus sector stakeholders, and how they achieve them. This is summarized according to four main themes: origin (Section 3.1.1), naturalness and freshness (Section 3.1.2), visual appearance (Section 3.1.3), and maturity (Section 3.1.4). This analysis will reveal the tensions between the goal of achieving optimal sweetness and acidity within the constraints of production norms and commercial realities (Section 3.2). Lastly, in Section 3.3 we present insights gained on local wholesalers and stores.
Growers' and Commercial Agents' Quality Objectives
Growers and commercial agents were asked to express what quality criteria are important to them and to describe what production and commercial practices they utilize to achieve them. Table 3 is a synthesis of their answers. We were able to outline the different dimensions of their quality goals developed afterward: origin, naturalness and freshness, visual appearance, and maturity. Table 3. Quality criteria and associated practices at production and commercial levels.
Quality Criteria / Production Practices / Commercial Practices

Juicy and tender: Production - more fertilizer means more juice; regular irrigation throughout the year. Commercial - use imagery of water droplets on the packaging.
Firm and healthy (full of vitamins): Production - not harvested until ready to be commercialized. Commercial - weekly procurement and promotional plans set months ahead; order volumes adjusted according to fruit maturity and demand; JIT distribution system (from tree to shelf in less than a week, stored a maximum of 2 weeks).

Visual Appearance
No visual defects: Production - drain stagnant water to avoid spots; use pruning techniques to encourage the fruits to grow under the skirt; sorting #1: marked fruit is left in the orchards. Commercial - sorting #2 in the conditioning stations: fruits that do not fit the GI standards are discarded.
Red cheek: Characteristic of the Star Ruby variety and a sign of maturity; if left on the tree any longer, the entire fruit turns red/orange.
Smooth yellow skin: Characteristic of the Star Ruby variety and of mature trees (>5 years).
Pink flesh: Production - characteristic of the Star Ruby and an influence of the terroir (sun exposure and soil type). Commercial - use the color pink in the packaging.
Medium size: Production - pruning the trees may reduce the average size and the occurrence of extreme sizes. Commercial - fruit are sorted and sold with like sizes.

Balance of sweetness and acidity: Production - physicochemical test to trigger the harvest (according to thresholds established in the GI's Code of Practices); later harvest times for sweeter fruits; avoid harvesting during the flowering period. Commercial - customers do their own physicochemical tests.

Note: CoP - Code of Practices; JIT - just-in-time.
Origin
According to commercial agents, origin is a desired trait associated with labels for all chain stores that try to find high-quality French products to offer alongside more generic ones. In addition to the GI, chain stores use their private label regional brands to position the product and utilize packaging to communicate the product/brand qualities to consumers (i.e., by referencing place-based qualities and official quality labels). According to one agent, consumers trust the quality of those private label brands and the official quality certifications. That is why chain stores specifically look for products with certifications to commercialize under their own brands. As a commercial agent explained, "The marketing is half taken care of when we have an official quality label".
In order to strengthen the position of the French premium grapefruit in the market, local stakeholders make sure to commercialize their product when there is less competition from other origins.
For the grapefruit we arrive at the end of the season for United States, Israel, and Spain, but before South Africa. So, we have about a month and a half, 45 days, when we are practically the only ones on the market. Because the end of United States is always a bit tired looking, the grapefruits aren't round anymore they are square, and it's the beginning of South Africa and Argentina's harvest which are still a bit green, still on the edge, while at that time, we can produce an extraordinary product. So that's really important for us.
-Commercial agent
It was also reported that French origin is important to consumers buying certified products (GI or organic). It highlights proximity, which is perceived as an environmental benefit and a sign of freshness, since the product travels a shorter distance than similar foreign products. Moreover, interviewed farmers explained that French consumers have more confidence in French regulations when it comes to safety, environmental standards, and overall traceability. They explained how, for a long time now, certain chemical fertilizers and pesticides have been banned in conventional farming in France while still being authorized and used in organic farming in other EU countries like Spain. This perception may come from the fact that there are cases where stricter regulations were applied in France than in other EU countries, as per Regulation 834/2007, Article 34 [45].
Beyond origin, the agents spoke on the strength of the Corsican brand image which evokes many things in the consumer's mind: beautiful beaches, vacation time, sunshine, wilderness, reasoned agriculture, tradition, artisans, high gustatory quality, freshness, etc. As one grower and commercial agent put it, "We have a huge advantage being in Corsica. Corsica helps sell because of its image of nice terroirs, and an untouched beauty through its countryside and nature. So, selling is easier when we highlight Corsica". The Corsican grapefruit is positioned as a natural, local product that is held to high product and production standards. The aromas act as signature to 8 farmers out of 15; it is how the fruit is distinguished from other origins. This said, farmers, as well as commercial agents, do not refer to any specific aromatic features.
Naturalness and Freshness
According to interviewees, naturalness is a strong quality criterion, supported by codified eco-friendly farming (GI or organic), and no post-harvest treatments. Many growers (6 out of 15) felt that the absence of pesticide residue was very important, particularly those in organic agriculture. Most growers felt that their grapefruits were produced more naturally than others on the market and that this was a strength. The GI's product specifications are similar enough to other private certifications that chain stores encourage growers to pursue, such as the Global G.A.P. certification, to facilitate communication on these generic qualification standards.
The responses underscored how the concept of naturalness plays hand in hand with freshness. As one grower explained, "It's the magic of eating a fruit that has just been picked". Therefore, the fruit has to be firm and juicy when consumers buy it. The commercial agents all utilize the same just-in-time (JIT) distribution system that they have devised for the clementine, allowing them to commercialize their fruit with the signature green leaf [1]. In about 3-5 days, the fruits go from the field to the grocery store shelves across France.
JIT relies on a complex equilibrium between annual planning and weekly adjustments. Being so large, the chain stores function very systematically and plan far ahead. For this, they need to have guarantees on volumes and quality ahead of time. During harvest time, there is a bit of wiggle room (to account for the product's maturity), but commercial agents still try to have weekly procurement and promotional plans set months ahead. In order to meet order volumes, the commercial agents will sometimes keep a bit of stock on hand or build it up for a few days in preparation for a big in-store promotion for one of their clients. Even in these cases, the fruit is never stocked for more than two weeks despite the fact that the CoP allows for cold storage up to eight weeks.
Visual Appearance
The visual aspect is one of the factors that was mentioned the most by growers and commercial agents. It deals with many quality criteria that characterize a Star Ruby grapefruit including pink flesh (mentioned by 11 farmers), smooth yellow skin (4 farmers), the red cheek (5 farmers), or the fruit's freshness and firmness. The visual aspect also includes criteria that have great influence on the farmers and commercial agents' practices including the size of the fruit (8 of 15 growers), and the absence of visual defects on the skin (all growers).
Farmers aim to produce a medium-sized fruit because it brings the highest prices and is the juiciest. While they cannot control this factor, they aim for consistent sizing, avoiding extremes such as golf-ball or bowling-ball sized fruits, which cannot be commercialized. One grower reported how pruning helps control the size of the fruit, "When the trees are carrying higher loads, in general, the size of the fruit is more consistent, it is good. As soon as they are not full enough, there are big variations". Different clients have different strategies when it comes to selling the grapefruit and so the commercial agents cater to those different demands by offering a range of packaging options. Extreme sizes (too big, too small) are discarded, larger fruits tend to be sold by the unit, medium-sized in flow packs of two, and smaller ones in flow packs of four or in mesh bags.
When it comes to the visual aspect, the top priority for farmers and commercial agents is to reduce the visual defects from marks, dents, and bug bites. When the fruit is marked, it cannot be commercialized under the GI, nor is there a secondary market for transformation. Over the years, growers have improved their pruning techniques to create sunlight chimneys, so the productive branches grow towards the middle, protected by the skirt (foliage). Still, substantial volumes of grapefruits are discarded (20% to 50%) in the orchards and in the conditioning stations.
Even growers commercializing through organic market channels agreed that, after being reassured about origin and quality by the GI label, consumers buy with their eyes and that is why they have developed a no-tolerance policy for visual defects, i.e., spots or markings. As one grower who produces under both organic and GI labels explains, "If there is a spot on it, garbage". It is recognized that traditional organic store channels are more lenient on the visual appearance since consumers generally know that there are more production constraints. However, there is a feeling among growers that consumers are becoming less tolerant of this even through these alternative channels. Large chain stores selling organic also require the GI. Therefore, the visual standards are still very strict for most organic grapefruits. Those stores are becoming stricter on marks for conventional and organic fruits, especially with the new trend in using flow-packs of two or four fruits. Commercial agents reminded us that it is not like in a 15 kg case which, according to the GI product specifications, allows up to 30% of the fruit on top level to have marks. In a two-pack, one fruit with marks represents 50% and this can easily deter the customer. Unfortunately, this trend is leading to a higher waste percentage.
Maturity
Notwithstanding fundamentals such as origin, freshness, naturalness, and absence of visual marks, all the commercial agents agree that the grapefruit has to taste good. One commercial agent remarked on its importance, "The gustatory qualities need to match the image. And we have a very nice image". When they talked about taste, they always referred to the balance of sweetness and acidity which is mostly associated with the maturity of the fruit.
Having the fruit maturing fully on the tree while achieving an internal balance of sweetness and acidity is a priority shared by all farmers, GI and/or organic. This equilibrium is generally reached in March. Harvest is systematically triggered when the fruit meets the required internal standard through physicochemical analysis. Teams of seasonal workers who arrive in October to harvest the clementine also harvest the grapefruit just before leaving Corsica.
As reported by commercial agents, demand currently far outpaces supply. As long as they reach the minimum quality standards, they can commercialize their grapefruits as soon as the market is ready for them. One agent explained how they do not have to work too hard today to communicate on specific product qualities, "What's the use in marketing if there is not enough product? On the contrary, I am hiding. When I sell a load, I try not to mention it to my other clients because then they will ask me for some, and I don't have enough for everyone".
The Balance between Sweetness and Acidity at the Crossroads between Production Needs and Commercial Realities
Commercial agents recognize that the Corsican grapefruit's success is in large part due to being able to consistently find the balance between sweetness and acidity (for GI CoP specifications see Table 1). Since they have focused more on this quality criterion, they have seen prices slowly increase to reflect the higher standard. One sales representative explained, "We started very low, and little by little we were able to increase the price. Today we are higher than Spain and Turkey, but lower than Florida. We're positioned today at a premium price".
Nonetheless, this physicochemical analysis approach to evaluating fruit maturity has limits. Some growers feel that quality is compromised when the harvest period is systematically triggered too early based on market openings. The growers prefer when the fruit is sweeter and think this could appeal to a larger segment of the market. They envision that taste-testing should be broadly employed as an added measure for quality control.
Commercial agents recognize that, early in the season, though the fruit meets the GI's minimum acidity/sugar ratio and juice percentage, it is sometimes on the fence in terms of gustatory quality since it is still too acidic for most people's taste. One agent even went so far as to admit that, in the early season, "you have to be a hardy person to eat it". Another reason to begin later is that they feel the grapefruit is a spring/summer fruit people like to eat when the weather warms up.
Commercial agents would like to see the harvest later in the year by a few weeks, but they feel the pressure from growers who want to start harvesting as early as possible so they can finish sooner. Another rep echoed this sentiment by explaining, "The best time to sell is April-May-June, but if you ask the growers when the best time to harvest is, they'll tell you it's in November. It's stressful for them especially since in the springtime there is the flowering, so you either need to pick before or after". The longer the fruit stays on the tree, the sweeter it gets. Leaving the fruit on the tree longer, however, leads to other technical challenges and quality problems. If the growers keep the fruit on the tree until the summertime, the tree suffers as it begins to produce new fruit at the same time as the old. When harvesting, farmers risk knocking off the young fruit. Later harvests also add another type of spot, from fly bites, which are more common in May/June as the weather warms.
Selling average tasting fruit is possible because of the way grapefruits are currently evaluated on the market. As pointed out by most growers, in the market, more importance is placed on external quality markers: consumers wanting a homogenous looking medium-sized grapefruit with no "defects". One farmer summed up well the feelings of most, "The market demands a nice-looking fruit. Consumers automatically approach the better-looking fruit. So, you know what we do? We cave to the demands of the market and only sell the good-looking fruit". Moreover, though visual appearance is thought to strengthen the GI's image, it is not always considered a product differentiator as one independent organic agent explained, "In terms of aesthetics, a nice foreign grapefruit and a nice Corsican Grapefruit, they are both nice fruits. We're not going to say that the Corsican one is better, it's not true. The main benefits are certainly the health benefits of the fruit (the absence of chemical residues)".
This issue of quality is acceptable for local stakeholders because the system has proven to be effective and profitable until now. Though commercial agents have received a bit of negative feedback on the quality for being too acidic, overall, it meets customer expectations and the prices today make it a profitable fruit to sell. It is further helped by the fact that it is commercialized at a time when there is relatively little competition from other origins. At this current (small) volume of production, commercial agents do not think prices in conventional or organic could rise much more before becoming too expensive for the average consumer.
Insights from the Inquiry on Local Wholesalers and Stores
When asked about their expectations for the Corsican grapefruit, local wholesalers and stores systematically report on seasonality and freshness. Otherwise, expected qualities differ amongst them. We were able to build a typology (Table 4), according to how each store carries out its local citrus procurement, and the importance they give to a label, the expected origin, and the expected grapefruit. The traditional type comprises small and medium-sized independent retailers who adhere to traditional buying behavior more common before chain stores arose in the 1980s [33]; they buy grapefruits directly from a producer or another wholesaler they know. Because they trust their suppliers, they do not need labels to guarantee origin or quality of products. When they buy a Corsican product, they expect it to be produced naturally, without pesticides. They are looking for mature, good-looking fruit. Grocers can often wait to place orders until the fruits are sweeter. To them, price is also important.
The pink and juicy type and the tasty type are quite similar to the traditional type in that they are made up of independent stores and wholesalers. However, both are less specific on their desired product qualities for the grapefruit. The pink and juicy type looks for organic local products even if an organic label is not provided. They also look for the fruit to be mature as indicated by the color and the juiciness. The tasty type does not look for any labels in particular; they just want the fruit to be tasty.
The conventional type deals with chain stores or organic stores that buy local grapefruits through centralized distribution channels. They require the quality labels and rely on them as a way to guarantee the French or Corsican origin and to communicate it to consumers. These stores aim to buy a good-looking fruit; visual appearance (zero defects and size) is essential.
There is a hybrid type between the conventional type and the tasty type called the tasty French type. This comprises chain stores (conventional and organic) which have the liberty to purchase directly from local wholesalers, but which require the use of national labels (GI, organic), as directed by their corporate head office. These stores prioritize local products and want them to be tasty as well.
These results corroborate the opinions and sentiments about the market expressed by the other local stakeholder groups: visual appearance, freshness, and naturalness are central quality criteria. Nonetheless, this typology shows a more balanced set of quality expectations between internal and external factors than previously expressed. First, it shows that not all the chain stores belong to the conventional type. Some belong to the tasty French type, which means that there are different coexisting procurement philosophies, and some of the stores are open to other forms of qualification where maturity and taste could be as important as visual appearance. Second, wholesalers and independent stores show another procurement logic which could be mobilized by farmers and commercial agents: they are looking for a mature fruit, hence they are ready to wait for the fruit to be sweeter and they do not need any label to guarantee the origin or quality of the product.
Discussion
The GI Corsican grapefruit appears to be a commercial success as evidenced by steady price increases and the fact that local actors have been able to capture the added value [2]. Despite this success, our results revealed that the ways local stakeholders prioritize and achieve their quality objectives show important nuances, tensions, and shortcomings along the value-chain that degrade the system's sustainability. In this section, we will discuss how analyzing the qualification of the GI, taking a product-centered approach, helped us identify potential pathways toward sustainable development.
By taking a product-first approach, we observe four main issues or lock-ins which potentially threaten the scheme's sustainability (Figure 1). These are organized in four interlinked quadrants. Moving clockwise, we start with the internal generic quality standards (taste, aromas, etc.); the external quality standards (visual appearance); the commercial practices (market channels); and finally, seasonal and labor constraints. In Section 4.1, we illustrate how the GI CoP is mainly being used to comply with conventional standards (right quadrants of Figure 1); Section 4.1.1. explores how "naturalness", an important product differentiator, is not actually imposed in the CoP; and Section 4.1.2. discusses how quality control is mainly based on physicochemical standards and a very low tolerance for visual defects. We describe how the qualification of the grapefruit is subject to a broader group of stakeholder decisions related to business constraints (left quadrant of Figure 1), such as using foreign labor and JIT logistics (further elaborated in Section 4.2). Finally, in Section 4.3, we discuss potential system evolutions or sustainability pathways (summarized in Figure 2) to re-establish the fruit's typicality, reduce food waste, embed the fruit in the territory, and re-organize the work for year-round employment opportunities.
Using the GI CoP to Comply with Conventional Standards
The GI CoP is currently being used as a tool to measure the product against conventional standards [33] rather than to exemplify and protect the qualities linked to its origin that help differentiate it on the market. The GI was designed to help homogenize and elevate product quality standards but above all, it is used as a marketing tool to help penetrate modern market channels [34,35,46]. After our analysis, it is unclear if any of the intrinsic qualities of the fruit truly differentiate it from other grapefruits on the market. It is concerning that the grapefruit is marketed as a premium product, and on this basis, expected to have high gustatory qualities, because this leads us to question whether the current points of product differentiation will be sufficient to remain competitive in the long-run.
Naturalness, an Important Differentiator That Is Associated with the GI But Not Imposed in the CoP
Naturalness was cited as an important factor by everyone and reinforced through a no-tolerance policy for post-harvest treatments and by the Corsican brand image (wild, natural). It is a factor that is market-driven and is growing in importance as demand rises for healthy products free from chemical residues [41].
Despite encouraging reasoned agricultural practices, none are written and required in the GI's CoP. Stakeholders cannot, therefore, guarantee that the GI Corsican grapefruits are more environmentally friendly than conventional ones. As is the case with many GIs in Europe [47][48][49], the impression of naturalness is an intangible factor that is not actually associated with any explicit eco-friendly rule written in the CoP. Though GIs are often viewed as more environmentally friendly, this is not their core purpose. In our case, the GI acts rather as a framework within which a wide variety of agricultural practices can be applied.
Thus, to comply with chain stores' supplementary expectations, growers are encouraged to pursue other third-party certifications such as Global G.A.P. or the Global Food Safety Initiative (GFSI) schemes (BRC, IFS, etc.) [50], as well as organic certification, which is a more involved commitment. The former offer further guarantees on traceability; the latter helps meet growing consumer demand for certified organic products.
Quality Control Based on Physicochemical Standards and Very Low Tolerance for Visual Defects
In their collective approach to define and control quality, the sector has utilized two basic forms of quality control which have helped them raise and homogenize quality standards for taste and visual appearance [2,44]. These qualifications do not help define or protect the products' specificity [1,33,41].
Currently, flavor is only assessed based on physicochemical tests measuring the balance between sweetness and acidity. This does not do much to help guarantee or identify the sensorial profile that distinguishes the Corsican grapefruit from others [51]. There is a dissonance between the local stakeholders' representation of the ideal Corsican grapefruit and that which the consumers have to guide them in their sensorial experience [28]. This is reinforced by the fact that consumers in Europe and in France generally still do not fully grasp the significance of GIs in terms of what they represent for quality (especially for typicality) [4,15]. Nevertheless, it is important for local stakeholders to continue to strive to educate consumers on the significance behind the GI.
Similarly, controls have been put in place in order to commercialize only the "good-looking fruit". Recall that the product specifications only allow fruits in category I or extra to be commercialized under the GI label. This echoes the approach the local stakeholders took to integrating visual standards into the clementine's CoP [1]; at that time, however, they also identified qualities that spoke to the product's specificity. For the clementine, the specificity is evidenced by the leaf and the green underside, a mark showing the effects of the local climate on the color of the fruit. Leaving the green skin was an important decision that supported the group's marketing strategy, and it also had important implications for the underlying production practices which helped link the product to its terroir. Selling the fruit with the leaf and with some green skin shows that the product did not undergo any post-harvest chemical treatments and that the fruit was carefully harvested at maturity with at least two passes on the tree [1,11].
By contrast, the criteria defined for the grapefruit fall short of elements that could help specify it. The CoP does describe the product as having a "red cheek" (or an orange-red spot), which is a sign of maturity; however, not all the fruits are required to have this in order to be harvested. Variability in quality might be reinforced by one of the rules in the CoP: all fruits on the tree must be harvested at once, and all the trees in the same orchard block must also be harvested at the same time.
The way grapefruits are qualified and harvested has negative consequences for the scheme's sustainability, especially from an environmental perspective. The current system leads to high levels of food waste as anywhere from 20% to 50% of the fruit that is produced is discarded. This represents a drain on local environmental and economic resources like water, fertilizers, time, etc.
The organic GI fruits are no exception, as they are also held to the same high visual standards. Qualifying the fruit according to these visual standards significantly reduces the profitability of organic production under the GI, where yields are typically lower to begin with and the fruits tend to have more visual defects. Further, since prices of the GI are already at the higher end of the spectrum, the organic growers in the GI feel like they cannot increase the price much to compensate for these losses. Prices for fruits certified under both labels are barely higher than those only certified under the GI. They are at prices comparable to the market leader, Florida, which is the main alternative to the Corsican grapefruit. It is thought that these prices could be maintained so long as demand for Corsican grapefruit continues to surpass supply.
Thus, there is less incentive to pursue stricter agroecological measures. Our case provides an example of how pairing the GI label with the organic one is actually helping reinforce the conventionalization of organic [50,51]. This paradox is strengthened and stabilized by consumer perceptions of the "perfect fruit" [40] and their willingness to pay a premium for origin products [13], a dimension that we did not seek out in our investigation but that surfaced in many of the interviews.
Despite being fit for human consumption, a large proportion of the grapefruits are discarded, usually due to visual defects. There are currently no alternative markets for the fresh or transformed fruit (juice, jellies, etc.). The CoP also does not allow for fruits that do not meet the GI standard to benefit from the GI label as a transformed product (e.g., food "made with 100% GI Corsican grapefruit" is not authorized). Furthermore, local stakeholders do not envision selling the marked fruits using the label "origin France" as they feel that this would compete with the GI, potentially reducing price premiums.
Foreign Labor and Just-in-Time Logistics Constraints
Now that we have discussed the lock-ins observed related to the CoP (right side quadrants of Figure 1), we turn our attention to the left-hand side: the broader stakeholder business decisions. We see how these have a symbiotic relationship with the CoP and directly impact the product's qualification.
Firstly, we have observed how labor acts as a constraint: seasonal workers on six-month working visas are hired for the beginning of the clementine harvest in October, which means that they must wrap up harvesting and leave by the end of March. The minimal ratio between sugar and acidity allows growers to do this. They can start harvesting as early as February, when the fruit can still be acidic. This reinforces the idea that the grapefruit is a secondary product, thought to complement the clementine production [2]. It also shows the compromises being made on the qualification of the grapefruit, as it must bend to fit the needs of the more dominant clementine production including its infrastructure and logistics (JIT distribution) systems.
In the 2000s, the stakeholders considered an alternative commercial model, one where they would store the fruit to sell it on the local summer tourist market [2]. This explains why the CoP authorizes storage for up to eight weeks. Despite this, today producers and commercial agents do not utilize this option because their views on what qualifies as "fresh" erroneously align with what they know about the clementine, i.e. that it is a fragile fruit whose quality characteristics degrade at a fast rate [52]. Since freshness is a top-quality criterion, a JIT distribution system has been devised for the clementine. As a complementary product, the grapefruit production has, by default, adopted the same distribution system, despite the fact that the fruit's qualities degrade at a much slower rate [39] and that local experts have found that the fruit can even benefit from cold storage which helps refine it, concentrating sugars and thinning the skin [2,53-55].
The sector is selling the image of freshness with the idea that consumers are eating a fruit that was just hand-picked from the tree. This reinforces the image of naturalness (associated with eco-friendly farming practices) and health benefits (fresher fruits would have more vitamins) [38]. In the case of the Corsican grapefruit, this is potentially restricting the commercial and distribution options of stocking the fruit for sale during the spring-summer tourist market season.
From Provenance to Embeddedness: Identifying Levers for Change
The Corsican citrus fruit sector uses a differentiation strategy based on quality and origin, yet as we have demonstrated, the Corsican grapefruit's link to its terroir is weak as it is not based on specific and tangible qualities. The stakeholders mainly rely on the seasonality of the fruit (market gap) and the strong reputation of the Corsican brand image to sell it [14]. The grapefruit is positioned as a premium product, after Florida, which remains the market leader in terms of quality, as reflected in the price. The Corsican grapefruit's success is therefore skewed, relying more on localization and its reputation than on origin [15]. If the fruit, for example, is unable to reach its full aromatic profile and be distinguishable by taste, or if it simply falls short of customer expectations, it could degrade the reputation of all Corsican citrus fruits. Thus, there are high stakes for specifying the GI Corsican grapefruit, collectively, to strengthen the link to its terroir [15,17].
Having discussed the identified lock-ins of the GI Corsican grapefruit, we can envision solution pathways for long-term sustainable planning as illustrated in Figure 2.
Beginning in the bottom right quadrant and moving clockwise, we can visualize as a solution, accepting and embracing minor visual defects (Section 4.3.1) which could help in long-term strategic planning to further define the fruit's typicality and reduce food waste. Moving forward with these solution pathways would require consensus among the collective to evolve the GI's CoP [13].
The sector's long-term diversification strategy rests on introducing new fruits to the mix to produce a cohesive basket of goods and on optimizing all potential market channels. If the local actors introduce a new fruit that is harvested just after the clementine, then the grapefruit could be harvested later. This would allow growers and commercial agents to hire workers on an annual basis, thus, reducing the pressure to harvest so early. Later harvests paired with cold storage could help diversify distribution channels to reach the local spring/summer market. Overall, this could have a positive effect on the product's gustatory qualities. All of this necessitates, however, further anchoring and embedding of the fruit into its territory (Section 4.3.2).
Precedents of Unconventional GI Fruits Sold in Supermarkets
Being more lenient about the visual aspect to reduce the proportion of discarded fruit could be a lever for encouraging eco-friendly practices such as organic farming. This idea has been observed for other French fruits and vegetables under GI [47]. Corsican citrus stakeholders have already done it with their flagship product: they were able to convince chain stores to sell a smaller, slightly green, and more acidic clementine in a world where big, orange, and sweet prevail [1]. By further specifying their own product, local actors can assert greater control, leaving less room for downstream forces to impose expectations on product appearance [17].
Furthermore, as our results show, chain stores are not a homogeneous group [33] and do authorize a variety of standards. This plurality of purchasing behaviors constitutes potential levers for change to help further specify the Corsican grapefruit. The actors can benefit from the open mindedness of certain buyers in Corsica and elsewhere [17] and work to improve their communications to educate the consumer on things like taste [41].
Being more lenient about the Corsican grapefruit's appearance would also open up possibilities of transformation for the GI. To that end, the CoP could allow the use of the GI name to be applied to transformed products such as juices and jams that are made using the fruits that do not currently meet the visual standards of the GI (marked and discarded fruits). The collective is considering ways in which they might evolve the GI's CoP. Some working groups are also looking to invest in a juice factory for this potential scenario.
Perspectives of Embedding the GI Corsican Grapefruit into the Local Food Identity
The local citrus stakeholders have built a strong and cohesive collective organization able to sell Corsican citrus to France and beyond [2,44]. However, qualifying origin products is not only a question of the sector's business organization, but also a question of how the local society interacts with a product and consumes it [6,42]. Ensuring that an origin product is locally embedded informs how resilient its production will be [56]: local products are "at one level, rooted in a specific territorial context, and at the same time, hold the potential to travel to distant markets" [57].
In other cases, as in ours, products were developed recently and primarily for export markets: the Corsican grapefruit and clementine date from the 1960s [2,33,44]. When this is the case, the economic sustainability of the sector is the main focus, and other elements important to the broader sustainability of the sector may be compromised.
Locally embedding the Corsican grapefruit would mean creating the conditions for the local society to appropriate the product. This perspective is consistent with the work on local products and their embeddedness [17,58]. This concept also builds on recent work in agroecological transition, which acknowledges that transformations in agri-food systems towards greater sustainability take shape at the territorial level, at the intersection between health, food, environment, and agriculture [20,59,60].
From a practical standpoint, this would entail rethinking relationships between producers, marketers, and local traders to reach consumers locally (locals and tourists). These local stores are entry points into the local market. They could absorb more important volumes when the tourism season starts in the spring, providing opportunities to make fuller use of the fruit produced. As our findings suggest, not all stores rely on official labels to guarantee or communicate quality. Further study is required to understand the potential for growth on the local market; however, we estimate that even if a portion of currently discarded fruit could be better utilized and consumed locally, this could help significantly reduce food waste. It would also help increase awareness and consumption locally, helping to further embed the product into the local society.
Exploiting local channels is a way to rethink farming and marketing practices to find solutions for the immediate future, since the CoP need not evolve in order to accommodate them. In fact, this would allow the stakeholders to utilize existing underexploited rules. To that end, some farmers are considering investing in cold storage facilities to be able to increase volumes and/or to extend their selling season into the summer tourist market. It could also provide the space and context needed to establish the fruit's typicality by playing on a more informal, locally embedded qualification [16], and by removing existing constraints put in place by downstream forces within conventional and longer channels [15].
Diversifying market channels would finally help reduce the sector's dependence on mainstream continental chain stores and help mitigate against market shocks. We are reminded how vulnerable the sector is to these shocks by recent events such as the "gilet jaune" movement, strikes in sea transportation, and the COVID-19 pandemic and the resulting reduced access to foreign workers.
Conclusions
Since grapefruit production was developed to support the sector's flagship product (the clementine), its qualification has largely been devised for practicality. The quality control is based on conventional market standards for visual appearance, freshness, and instrumental evaluation of taste. This strategy has allowed Corsican producers to raise the quality of their fruits to levels comparable with other market leaders such as Florida. They have also been able to raise their reputation and relative quality based on links to origin which associates the naturalness of the island to the nutritional and ecological advantages of their product.
With this commercial success, the actors find themselves in an ambiguous situation today: they benefit from an undeniable competitive advantage on the French market, as Corsica is the only region in France with the right climatic conditions for citrus production [6], yet their fruit is not considered the most premium product on the French market [7]. Their niche production, which is sold almost exclusively through mainstream grocery stores, is thus still constrained and confined to times when there is a gap in the market between the productive periods of other world zones.
The Corsican grapefruit is essentially commercialized at a time in the year when the producers and commercial agents feel that the quality is compromised; it is still too acidic and does not express its full flavor profile. Even though the fruit is commercialized under the protection of a GI, its flavor and typicality are under-expressed. In this way, the GI is primarily used to help penetrate mainstream markets, relying on reputation of place, rather than as a tool to express and control origin-linked quality standards, or used for sustainable development. The balance of this system has brought on a certain commercial success, but there is growing concern that it cannot be sustained through other long-term market and climate-related threats.
The results of our study have been presented and have proven useful to local industry stakeholders and the GI governing boards, helping to stimulate discussions about the sustainability of their local GI food system. They allowed for constructive conversations reflecting on potential evolutions within their sector to ensure its sustainability.
This first inquiry could be supported by further market and consumer research seeking to better understand quality priorities for citrus and for grapefruit among French and European consumers, as well as market potential across diversified and local channels. Further study on the physiological evolution of the fruit over the season and its effect on the grapefruit's aromas could help lead further reflections on seasonality and typicality. This would help put in place gustatory controls, through sensory analysis in the GI. Finally, further reflection into the organization of seasonal work would help address socioeconomic issues related to the employment of foreign labor.
In this paper, we have shown how using a product-centered approach to understanding the product qualification along the value-chain can provide a useful framework to support sustainable long-term planning in GI food systems. Most notably, this approach helps to identify the nuances and practical implications associated with an agri-food system's strengths and weaknesses, which in turn, allows us to identify more specific solution pathways. In our case, this approach allowed us to identify key lock-ins based on tensions and compromises. Understanding these has helped us, and the local actors, identify potential solution pathways and plan for long-term sustainability goals related to reorganizing human resources, building upon the fruit's typicality, reducing food waste, and embedding the product into local society.
By proposing a conceptual framework adapted to the case of the GI Corsican grapefruit, we are helping refine an approach at the territorial level that is part of broader global challenges (agro-ecological transition and transdisciplinarity) [52]. Focusing on the product qualification ultimately allows us to grasp the situation fully. Our study raised issues of governance and of collective choices that should be explored further. Faced with its results, local actors can, by mobilizing and rearranging the alternatives thus exposed, build their own pathways towards sustainable development.
|
2020-09-03T09:04:21.429Z
|
2020-09-01T00:00:00.000
|
{
"year": 2020,
"sha1": "6dc7f8b7dc73767666d25bf822955eda8e8dbbcf",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/12/17/7148/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "889eeedc854ce35f1a9c15414a2faa66194c2ed0",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Business"
]
}
|
6428610
|
pes2o/s2orc
|
v3-fos-license
|
Analysis of Network Coding Schemes for Differential Chaos Shift Keying Communication System
In this paper we design network coding schemes for Differential Chaos Shift Keying (DCSK) modulation. In this work, a non-coherent chaos-based communication system is used due to its simplicity and robustness to multipath propagation effects, while dispensing with any channel state information at the receiver. We propose a relay network using network coding and DCSK, where we first present a Physical layer Network Coding (PNC) scheme with two users, $\mathcal{A}$ and $\mathcal{B}$, sharing the same spreading code and bandwidth, while synchronously transmitting their signals to the relay node $\mathcal{R}$. We show that the main drawback of this design in multipath channels is the high level of interference in the resultant signal, which severely degrades the system performance. Hence, in order to address this problem, we propose two coding schemes, which separate the users' signals in the frequency or the time domain. We also show in this paper that the performance of the Analog Network Coding (ANC) scheme with DCSK modulation suffers from the same interference problem as the PNC scheme. We present the analytical bit error rate performance for the multipath Rayleigh fading channel for the different scenarios and we analyse these schemes in terms of complexity, throughput and link spectral efficiency.
In the PNC scheme, it is assumed that the network coding operation carried out by the relay is reversible and is also known by the users. Then, each user can remove its own data message in order to detect the message sent by the other node. This method for the PNC scheme requires only two time slots to exchange one message from one node to another. It should be mentioned that in network coding schemes, a time slot for a given source or relay is defined as the time required to transmit its data packet [8].
Furthermore, in the ANC schemes, the relay simply forwards the received superimposed signal to the pair of user nodes [5]. The linear mapping is naturally done in the physical channel and the signals are decoded by the user nodes. In this scheme, due to the lack of noise cancellation at the relay side, the noise is forwarded with the superimposed signals, which leads to the degradation of the system performance.
Contrary to the ANC and PNC schemes, the SNC technique does not need any precise user synchronization; only network synchronization is required. SNC is a non-physical-layer network coding technique and requires three time slots to exchange a data message between the two user nodes [7]. The first and the second time slots are used by user A and user B, respectively, in order to transmit their signals to the relay. The signals sent by the two users are separately decoded and mapped by the relay before forwarding the combined signal to the two users in the third time slot.
Recently, there has been research focus on chaotic signals due to their advantageous characteristics that can be exploited in the field of wireless communications [10][11][12][13][14][15][16][17][18][19]. This is principally associated with the inherent wideband characteristics of these signals, which make them well suited for spread-spectrum modulation systems [11,20,21]. Chaotic modulations have similar advantages to all other spread-spectrum modulations, including, for example, the mitigation of fading channels and resistance to jamming. Furthermore, the low probability of interception (LPI) of chaotic signals [22,23] and their excellent correlation properties [21] make them natural candidates for military communication scenarios.
Various digital chaos-based communication schemes have been evaluated and analysed, including coherent chaos shift keying (CSK) [20,24,25], chaos-based DS-CDMA [10][11][12] and non-coherent Differential Chaos Shift Keying (DCSK) [13-15, 26, 27]. In CSK and chaos-based DS-CDMA, chaotic sequences are used to spread data signals instead of the conventional spreading codes used in DS-CDMA. One of the main drawbacks of coherent chaos-based communication systems, such as CSK, is that they require chaotic synchronization, which is non-trivial. For instance, the chaotic synchronization proposed by Pecora and Carroll in [28] is still practically impossible to achieve in a noisy environment and hence the coherent system cannot be used in realistic applications.
The research presented in this paper is based on one of the most commonly used non-coherent chaotic modulation techniques known as differential chaos shift keying (DCSK). In this system, each bit duration is split into two equal slots, where the first slot is allocated to the reference chaotic signal, while the second slot is used to transmit either the reference signal or its inverted version, depending on the bit being sent. The analytical performance derivation of DCSK communication systems has been presented in [29] for fading channels and in [13-15, 26, 27] for cooperative and Multiple-Input Multiple-Output (MIMO) schemes. Moreover, DCSK has been used in several systems in the literature, such as the design of new ultra-wideband systems based on DCSK, Multi Carrier DCSK (MC-DCSK) and Frequency Modulated DCSK (FM-DCSK) schemes [16,[30][31][32].
Motivation: The DCSK modulation is chosen as a candidate for network coding schemes due to its various advantages. In fact, chaotic signal generation and synchronization are not required at the receiver side of the DCSK demodulator [13-15, 26, 27], which makes this system easy to implement [33]. On the other hand, a common point between DCSK and differential phase shift keying (DPSK) modulations is that both are non-coherent schemes and do not require channel state information at the receiver to recover the transmitted data [34][35][36]. However, the DCSK system is more robust to multipath fading environments than the DPSK scheme [35] and is suitable for Ultra-Wideband (UWB) applications [16,17,[35][36][37].
Related work: In [38][39][40][41][42], ANC and PNC schemes have been proposed for spread spectrum systems and their performance was compared in [41] for different scenarios and wireless channels. However, it should be highlighted that the proposed schemes with coherent detection assume perfect knowledge of the channel state information (CSI), whereas the assumption of having perfect CSI at the relay or user ends is impractical [43]. Therefore, a DCSK modulation is preferred to avoid the strong, unrealistic assumption of perfect channel knowledge. In [38,39], each user in the system is assigned a specific spreading sequence, where the destination node applies the spreading sequence of the desired source node in order to decode its message. In this case, the decoder requires prior knowledge of the list of all users with their specific spreading sequences in order to decode the transmitted data without facing any multi-user interference. Recently, an analog network coding scheme for a Multi-User Multi-Carrier Differential Chaos Shift Keying communication system was proposed in [44]. This system supports multi-user transmission but is designed for ultra-wideband communications. The required bandwidth for this MC-DCSK system is M times that of the conventional DCSK system, where M is the number of transmitted bits. Therefore, when the available bandwidth is limited, a DCSK system may be preferred for such an application.
Contributions: This paper proposes novel network coding schemes for non-coherent chaos-based spread spectrum systems. The structure of this network is considered as a full duplex wireless system with one single relay, where each user node uses DCSK modulation. The motivation to use DCSK is associated with its advantages described above.
The novel contributions of this paper are summarized as follows:
1) The first proposed scheme, denoted as scheme 1 in this work, is physical layer network coding DCSK (PNC-DCSK). In this scheme, users A and B share the same spreading code and bandwidth and transmit their signals to the relay synchronously. The relay then decodes and maps the combined received signal and forwards the resultant signal to the users. We present the performance analysis of scheme 1, where we show that it suffers from strong interference originating from the cross product of the users' signals at the relay's correlator.
2) Hence, in order to address the interference problem of scheme 1, two other coding schemes, denoted as scheme 2 and scheme 3, are proposed in Section IV. In these schemes, the user signals are separated in the first phase of cooperation, in the time domain for scheme 2 and in the frequency domain for scheme 3. This paper shows that both schemes reduce the interference as well as enhance the bit error rate performance of the system. Scheme 2 takes advantage of multiplexing in the time domain and is equivalent to the SNC scheme, whereas scheme 3 is equivalent to PNC-DCSK but requires twice the conventional PNC scheme's bandwidth due to multiplexing in the frequency domain.
3) We analyse the corresponding interference levels and derive the analytical end-to-end bit error rate expressions for scheme 2 and scheme 3 over a multipath fading channel for different scenarios. Finally, we analyse the different BER performances of the PNC, ANC and SNC schemes, as well as the throughput and the link spectral efficiency of the two systems.
To the best of the authors' knowledge, there are no previous publications investigating the combination of DCSK modulation with network coding schemes. The remainder of this paper is organized as follows. In Section II, the DCSK spread-spectrum communication system is briefly presented. Section III is dedicated to the design and the performance analysis of the proposed PNC-DCSK scheme (i.e. scheme 1). Section IV presents the design of schemes 2 and 3. Additionally, in this section, the analytical bit error rate (BER), throughput and link spectral efficiency expressions of the two schemes are derived under multipath Rayleigh fading channels. Finally, in Section V, the simulation results are presented and the analytical expressions obtained in this work are evaluated, with some concluding remarks.
II. DCSK COMMUNICATION SYSTEM
The broadband nature of chaotic signals and their good correlation properties, as well as the ease with which they can be generated, have made them a special type of signal that can be advantageously used to design and implement spread spectrum communication systems. Each transmitted bit $s_i \in \{-1, +1\}$ is represented by two consecutive sets of chaotic signals: the reference signal followed by the data sequence. Depending on whether the sent bit is +1 or −1, the reference sequence or its inverted version is used as the data-bearing sequence. During the i-th bit duration, the output of the transmitter $e_{i,k}$ shown in Fig. 1 can be given by

$e_{i,k} = x_k$ for $2(i-1)\beta < k \le (2i-1)\beta$, and $e_{i,k} = s_i x_{k-\beta}$ for $(2i-1)\beta < k \le 2i\beta$,

where $x_k$ and $x_{k-\beta}$ denote the reference sequence and its delayed version, respectively. Here, $\beta$ is an integer and $2\beta$ denotes the spreading factor, which is determined by the number of chaotic samples sent for each bit, as shown in Fig. 1(b). In order to demodulate the transmitted bits at the receiver side, the received signal $r_k$ is correlated with its delayed version $r_{k+\beta}$ and summed over a half bit duration $T_b/2$, where $T_b = 2\beta T_c$ and $T_c$ denotes the chip time. This demodulation process can be performed without any need for channel state information at the receiver side, which is a benefit of the insertion of the reference signal within each symbol. Finally, the sign of the correlator output is computed to estimate the received bits, as shown in Fig. 1(c).
In this work, the second-order Chebyshev polynomial function (CPF) is employed to generate chaotic sequences due to its simplicity and good performance [45]. For simplicity, the chip time is set to one, i.e. $T_c = 1$; hence, the sequence $x$ can be obtained as $x_{k+1} = 1 - 2x_k^2$. The variance of the normalized, zero-mean chaotic map is equal to one, i.e. $\mathrm{Var}(x) = E[x^2] = 1$, where $E[\cdot]$ denotes the expected value operation.
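As an informal illustration of the system described above, the following Python sketch generates a CPF spreading sequence and runs a DCSK modulator and delay-correlator over a toy AWGN link. It is only a minimal sketch: the function names, the per-bit re-seeding of the chaotic map and the noise level are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def chebyshev_sequence(length, x0=0.1):
    """Second-order Chebyshev map x_{k+1} = 1 - 2*x_k**2, scaled so that the
    normalized map has zero mean and unit variance, as assumed in the text."""
    x = np.empty(length)
    x[0] = x0
    for k in range(length - 1):
        x[k + 1] = 1.0 - 2.0 * x[k] ** 2
    return np.sqrt(2.0) * x  # the raw map has variance 1/2; scale to unit variance

def dcsk_modulate(bits, beta):
    """Each bit occupies 2*beta chips: beta reference chips followed by
    beta data chips equal to +/- the reference, depending on the bit."""
    frames = []
    for i, b in enumerate(bits):
        ref = chebyshev_sequence(beta, x0=0.1 + 1e-3 * i)  # fresh reference per bit
        frames.append(np.concatenate([ref, b * ref]))
    return np.concatenate(frames)

def dcsk_demodulate(r, beta):
    """Delay-correlator: correlate the data half-slot with the reference
    half-slot and take the sign of the result."""
    n_bits = len(r) // (2 * beta)
    out = np.empty(n_bits)
    for i in range(n_bits):
        ref = r[2 * i * beta:(2 * i + 1) * beta]
        data = r[(2 * i + 1) * beta:(2 * i + 2) * beta]
        out[i] = np.sign(np.dot(ref, data))
    return out

# toy end-to-end check over an AWGN channel (illustrative noise level)
rng = np.random.default_rng(0)
beta, n_bits = 100, 200
tx_bits = rng.choice([-1.0, 1.0], size=n_bits)
rx = dcsk_modulate(tx_bits, beta) + 0.5 * rng.standard_normal(2 * beta * n_bits)
print("empirical BER:", np.mean(dcsk_demodulate(rx, beta) != tx_bits))
```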
III. PNC-DCSK COMMUNICATION SCHEME

Fig. 2 illustrates the topology of the proposed PNC-DCSK system, which we refer to as scheme 1. The user synchronization process is carried out by the relay sending a clear-to-send (CTS) message in response to the request-to-send (RTS) message sent by the user nodes. For example, in the system shown in Fig. 2, we consider two nodes A and B that want to communicate with each other and wish to transmit their message frames to the relay simultaneously. In this scheme, the user nodes utilise the same spreading sequence, i.e. chaotic signal, in the data frames sent to the relay node. As a consequence of using the same spreading sequence, the number of chaotic codes present in the channel is reduced to one; thus, lower multiple access interference is experienced by each user and the power of the useful resultant signal is boosted. The users' transmitted data frames are superimposed in the wireless channel when they are received by the relay node. The relay then despreads and decodes the received combined signal. The decoded symbols are mapped by the relay before being modulated and broadcast to the user nodes. It should be noted that the relay uses the same modulation scheme as the nodes, i.e. DCSK modulation.
The PNC-DCSK communication protocol is summarized in Table I, where $s_A$ and $s_B$ represent the data messages of user nodes A and B, respectively. Furthermore, $e_A$, $e_B$ and $e_R$ refer to the transmitted signals from the user pair to the relay node R and from the relay node to the user nodes, respectively. Additionally, the transmitted signal represented by $[x, sx]$ expresses the DCSK frame, where the first term represents the reference sequence $x$ and the second term denotes the symbol $s_i \in \{-1, +1\}$ multiplied by the reference sequence $x$. Finally, $s_{R,D}$ and $s_{R,M}$ are the decoded and mapped symbols of the received superimposed signal at the relay side.
As can be seen from Table I, the mapping function at the relay converts the decoded symbols of the received signal $r_R$ by mapping 2's to 1 and 0's to −1. Using the DCSK modulator at the relay end, the mapped bits $s_{R,M}$ are modulated into a DCSK frame $e_R$. Once the relay broadcasts its final DCSK data frame to the two nodes, the received information frame is decoded and demapped by the two users.
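A rough sketch of this decode-and-map logic, written with bipolar symbols and helper names of our own choosing (the paper's Table I is only paraphrased here), is given below; a quick exhaustive check confirms that each user recovers its partner's bit after removing its own.

```python
def relay_map(decoded_sum):
    """Relay mapping paraphrased from Table I: the decoded superimposed symbol
    is s_A + s_B in {-2, 0, +2}; magnitude 2 is mapped to +1 and 0 to -1."""
    return 1 if abs(decoded_sum) == 2 else -1

def user_demap(mapped_symbol, own_bit):
    """Each user removes its own bit: a mapped +1 means the two bits were
    equal, a mapped -1 means they differed, so the partner's bit follows."""
    return own_bit if mapped_symbol == 1 else -own_bit

# exhaustive check of the protocol logic (ideal, noise-free symbols)
for s_a in (-1, +1):
    for s_b in (-1, +1):
        m = relay_map(s_a + s_b)
        assert user_demap(m, s_a) == s_b and user_demap(m, s_b) == s_a
print("mapping/demapping recovers the partner's bit in all four cases")
```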
A. Channel model

In this paper, we use a commonly adopted channel model, the two-ray Rayleigh fading channel [34,35,37]. As shown in Fig. 3, a multipath Rayleigh fading channel with two independent paths is considered for each user. The channel coefficients and the time delay between the two rays in the first phase of transmission for user A are denoted by $\lambda_{1,l,A}$ (with $l = 1, 2$) and $\tau_A$, respectively. It should be noted that the channel coefficients follow the Rayleigh distribution and, in this work, they are considered to be constant, i.e., static, over the DCSK frame period $T = 2\beta T_c$, while they vary from one frame to another.
Therefore, the probability density function of a channel coefficient $\lambda$ in this case can be given as

$f(\lambda) = \frac{\lambda}{\sigma^2} \exp\left(-\frac{\lambda^2}{2\sigma^2}\right), \quad \lambda \ge 0,$

where $\sigma > 0$ is the scaling factor of the distribution, representing the root mean square value of the received voltage signal before envelope detection. In this paper, the largest multipath time delay is assumed to be shorter than the bit duration, i.e. $0 < \tau \ll 2\beta T_c$. Hence, the different time delay values are considered constant during each phase as they do not cause any significant difference; in other words, the intersymbol interference (ISI) can be neglected. Therefore, as a consequence of using the DCSK modulator in this network, this scheme benefits from high resistance to multipath interference. Moreover, this system does not require any CSI knowledge at the receiver for decoding the data, since the reference plays the role of a pilot signal [35].
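A minimal simulation sketch of such a two-ray channel is shown below; the delay is expressed in integer chips, and the parameter values (sigma, delay, noise level) are illustrative assumptions rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def rayleigh_coeff(sigma):
    """Rayleigh-distributed channel gain: magnitude of a complex Gaussian
    with per-component standard deviation sigma."""
    return np.abs(sigma * (rng.standard_normal() + 1j * rng.standard_normal()))

def two_ray_channel(signal, sigma=1.0 / np.sqrt(2), delay_chips=2, n0=0.1):
    """Apply two independent Rayleigh-faded paths (the second delayed by
    delay_chips) plus AWGN. Coefficients are held constant over the frame,
    matching the frame-static assumption, and redrawn on every call."""
    lam1, lam2 = rayleigh_coeff(sigma), rayleigh_coeff(sigma)
    delayed = np.concatenate([np.zeros(delay_chips), signal])[: signal.size]
    noise = np.sqrt(n0 / 2.0) * rng.standard_normal(signal.size)
    return lam1 * signal + lam2 * delayed + noise

# example: pass one DCSK frame (e.g., the output of the dcsk_modulate sketch above)
# through the channel:
# faded = two_ray_channel(dcsk_modulate(np.array([1.0, -1.0]), beta=100))
```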
B. Performance analysis of PNC-DCSK system
In this section, we analyse the performance of the PNC-DCSK system, where we show that the current form of this scheme, which we refer to as scheme 1, has high levels of interference, which degrades the system performance dramatically.
As mentioned in Section III, in the first phase of relaying, users A and B transmit their signals to the relay R by using the same reference, i.e. spreading sequence. The relay then decodes the resultant received signal, modulates the decoded bits into a DCSK frame signal and forwards this to the users in the second phase of relaying.
After receiving the CTS message from the relay, the two users transmit their signals to the relay simultaneously. The received signal at the relay $r_R(t)$ can be formulated as

$r_R(t) = \sum_{l=1}^{2} \lambda_{1,l,A}\, e_A(t - \tau_{l,A}) + \sum_{l=1}^{2} \lambda_{1,l,B}\, e_B(t - \tau_{l,B}) + n(t),$

where $\tau$ represents the path delay and $\lambda_{1,l,A}$ denotes the channel coefficient in the first phase of relaying corresponding to the l-th path of user A. Additionally, $e(t)$ is the emitted signal and $n(t)$ represents additive white Gaussian noise (AWGN) with zero mean and variance $N_0/2$. Then, to demodulate the signals transmitted by the users, the received signal $r_{R,k}$ at the relay is correlated with its delayed version $r_{R,k+\beta}$ and summed over a half bit duration before being compared with zero. Expanding this correlation for a given i-th bit transmitted from A and B yields the decision variable of equation (6). The first two lines in equation (6) represent the useful signal and the other components represent the intersymbol interference (ISI) and noise. It should be mentioned that the channel coefficients have zero mean and the mean of the decision variable is equal to the mean of the useful signal; this mean is given by equation (7), where the sum of $x_k^2$ over the half bit duration is the transmitted bit energy $E_b$. Note that for long spreading codes the bit energy $E_b$ might be assumed to be constant [24]. However, this assumption is not valid for short spreading codes due to the non-periodic and random behaviour of chaotic signals, as reported in [25]. Therefore, since the spreading factor in this work is sufficiently high, the constant bit energy assumption is valid. Note that the mean expression given in equation (7) is obtained based on the fact that chaotic signals, channel coefficients and AWGN are zero mean and the mean product of the delayed versions of chaotic signals is also zero, i.e. $E[x_k x_{k-\tau}] = 0$. However, the cross multiplication of the first user's reference signal with the data carrier signal of the second user results in a strong interference term, $(s_{i,A} + s_{i,B})\, \lambda_{1,1,A}\, \lambda_{1,1,B}\, E_b$, in equation (6). This term represents ISI, which can be strong enough to severely degrade the system performance.
Additionally, there are 18 other interference components in equation (6) resulting from the cross multiplication of the users' signals. Therefore, considering all these interferences present in the decision variable, it can be inferred that in order to design reliable PNC-DCSK systems some form of interference mitigation is required. Fig. 4 depicts the BER performance at the relay of the PNC-DCSK scheme when considering transmission over AWGN and multipath fading channels with different interference levels. In order to show the impact of a strong ISI on the system performance for the multipath channel, the BER curves with and without the term $(s_{i,A} + s_{i,B})\, \lambda_{1,1,A}\, \lambda_{1,1,B}\, E_b$ in equation (6) are plotted. This is done by extracting this term from the decision variable and assuming that it is known at the relay, which is impractical but has been done for the sake of elaboration. In Fig. 4, the BER results are obtained for a spreading factor β = 100 and a fixed set of average multipath power gains $E[\lambda^2]$. As shown in Fig. 4, the performance of the relay in the PNC-DCSK when considering transmission over the AWGN channel is considerably better than in the two other scenarios, where the multipath channel is considered. This is mainly associated with the fact that for the AWGN channel the number of interference components is reduced significantly. On the other hand, as shown in Fig. 4, in multipath channels, when the strong ISI represented by the term $(s_{i,A} + s_{i,B})\, \lambda_{1,1,A}\, \lambda_{1,1,B}\, E_b$ is taken into account in the decision variable, the performance of the relay is significantly degraded. Moreover, even if this strong ISI term is neglected, the other interference terms in equation (6) affect the relay performance. For example, for $E_b/N_0 > 25$ dB the bit error rate is floored at $10^{-2}$ when we deliberately remove the strong ISI term. This poor performance shows that the current design with a DCSK modulator cannot be considered a viable option for PNC schemes.
The above study indicates the limitation of the proposed scheme and the open problem of improving its performance by reducing the interference level, thus making PNC-DCSK more reliable. Hence, in the following we propose potential solutions to design reliable PNC-DCSK techniques.
IV. TIME AND FREQUENCY MULTIPLEXED NETWORK CODING SCHEMES FOR DCSK SYSTEM
The majority of the interference terms in equation (6) are generated from the cross product of the users' data signals and the reference signals. In what follows we show how the interference can be mitigated by using time domain or frequency domain multiplexing techniques to separate the signals transmitted by user A and B during the first phase of relaying.
When time multiplexing is used during the first phase of relaying, the user nodes A and B transmit their signals to the relay R using different time slots, $T_A$ and $T_B$. We refer to this scheme as scheme 2. Hence, in order to avoid any signal superposition, the two time slots must satisfy the relation $|T_A - T_B| > 2\beta T_c$. The time slot is defined as the time required for a source or relay to transmit one DCSK symbol. In this scheme, user synchronization is not required and the number of time slots required to achieve the end-to-end communication is 3.
On the other hand, in scheme 3, where the signal multiplexing is carried out in the frequency domain, the two users transmit their signals to the relay at the same time but using different carrier frequencies, $f_A$ and $f_B$. Note that when frequency multiplexing is used, the users' frequencies are required to satisfy $|f_A - f_B| > W_s$, where $W_s \approx 1/T_c$ denotes the signal bandwidth. Similarly to scheme 2, scheme 3 does not require any user synchronization process and the total number of time slots required to achieve the end-to-end transmission is equal to 2. Additionally, it is essential to note that scheme 3 requires twice the bandwidth needed for scheme 1 or scheme 2. The use of time or frequency multiplexing eliminates the cross product between the users' signals, which contributes to the mitigation of interference. In both schemes, the relay decodes and maps the received signals separately before broadcasting the resultant DCSK frame, which leads to considerably lower interference than in scheme 1.
Since the access protocol is the same for the time and frequency multiplexing schemes, Table II summarizes the decoding and mapping process that is valid for both scheme 2 and scheme 3. According to this table, in both scenarios, the relay separately decodes the information sent by the two nodes, given by $s_{R,A}$ and $s_{R,B}$. After the bits are mapped into a bit stream $s_{R,M}$, the relay modulates the mapped bits into a DCSK signal frame $e_R$ and transmits it to the two users in the second phase of relaying. For instance, in order to recover the data sent by user A at the user B side, which is represented by $\hat{s}_A$ in Table II, the received signal from the relay is first despread and decoded by user B. The decoded symbols $s_{(A,B),D}$ are then demapped into $s_{(A,B),M}$ by using the same mapping function as the relay. Finally, user B can recover the signal transmitted by user A by subtracting its own data signal.
A. Complexity analysis
In this section we summarize the complexity analysis of schemes 1, 2, and 3. The decoding complexity in the three schemes is identical and it is equivalent to that of the conventional spread spectrum communication system. Hence, the complexity of the different schemes is evaluated by the number of decoding, mapping and modulation operations performed at the relay.
Let us assume that each user transmits a packet of P bits during the first phase of relaying. Hence, since the signals are superimposed in the wireless channel, the relay in scheme 1 needs P decoding operations, P mapping operations and P modulation operations. However, in schemes 2 and 3, the relay decodes the user signals separately after the first transmission phase. Hence, the required decoding, mapping and modulating operations required for these two schemes are 2P , P , and P , respectively.
Additionally, it should be noted that the user synchronization is essential for scheme 1, while it is not needed for schemes 2 and 3. Table III compares the complexity, the systems parameters of the proposed schemes and the number of required time slots to achieve an end-to-end transmission.
Apart from synchronization, it can be concluded that while schemes 2 and 3 have roughly the same complexity to achieve an end-to-end transmission, scheme 3 requires twice the bandwidth and fewer time slots compared to scheme 2. Therefore, the choice between the two schemes is mainly based on the user requirements, particularly in terms of time or bandwidth.
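The per-packet counts and system parameters discussed above can be tabulated with a few lines of code; the dictionary below simply restates the comparison from the text (P decode, map and modulate operations for scheme 1, 2P decoding operations for schemes 2 and 3, two or three time slots, relative bandwidth and the user synchronization requirement). The formatting and field names are entirely our own.

```python
def scheme_comparison(P):
    """Relay operation counts per packet of P bits, end-to-end time slots,
    relative bandwidth and user synchronization, as discussed in the text."""
    return {
        "scheme 1 (PNC-DCSK)":      dict(decode=P,     map=P, modulate=P, time_slots=2, rel_bandwidth=1, user_sync=True),
        "scheme 2 (time mux)":      dict(decode=2 * P, map=P, modulate=P, time_slots=3, rel_bandwidth=1, user_sync=False),
        "scheme 3 (frequency mux)": dict(decode=2 * P, map=P, modulate=P, time_slots=2, rel_bandwidth=2, user_sync=False),
    }

for scheme, stats in scheme_comparison(P=1024).items():
    print(scheme, stats)
```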
B. BER performance analysis of the frequency and time multiplexed network coding schemes for DCSK system
In this section, the analytical BER end-to-end expression is derived for schemes 2 and 3. In both schemes the signals are separately decoded by the relay during the first phase of relaying, hence the BER derivation methodology for a given wireless channel is the same for the two schemes.
We consider the scenario where user A transmits its data to user B via the relay R and derive the BER performance at user B. In this case, user B first despreads and decodes the received mapped symbols sent by the relay and then removes its own data to recover the useful data transmitted by user A, as shown in Table II. In our analysis we assume equal transmit power for all nodes. In this sense, $BER_{1,A}$ and $BER_{1,B}$ represent the bit error rates of users A and B in the first transmission phase, respectively, and $BER_{2,B}$ denotes the bit error rate of user B in the second transmission phase.
The bit error rate at the relay in the first transmission phase is the sum of two error terms. The first term represents the situation in which there is no error in the detection of user B's bit, but an error in the detection of user A's bit. Conversely, the second term in equation (8) corresponds to the case where there is no error in the detection of the bit transmitted by user A to the relay, but there is an error in the bit transmitted by user B. Hence, the bit error rate at the relay can be represented as

$BER_R = BER_{1,A}\,(1 - BER_{1,B}) + BER_{1,B}\,(1 - BER_{1,A}). \qquad (8)$

Additionally, the end-to-end BER can be determined as the sum of two possible scenarios, either an error at the relay with a correct second hop or a correct relay decision with an error in the second hop (equation (9)), which yields the final end-to-end BER expression

$BER_{e2e} = BER_R\,(1 - BER_{2,B}) + BER_{2,B}\,(1 - BER_R). \qquad (10)$

Equation (10) requires the computation of three different BER expressions: $BER_{1,A}$, $BER_{1,B}$, and $BER_{2,B}$. To compute the end-to-end bit error rate expression for the first transmission phase of user A, the statistical properties of the appropriate decision variable must be determined. In this case, the decision variable for the i-th bit transmitted by user A in the first phase is obtained by correlating the received half-slots at the relay; expanding this product yields equation (12), where the first line denotes the useful signal and the other lines represent the multipath and Gaussian noise interference.
The decision variable in this scenario follows a Gaussian distribution, because the sum of the product of two independent chaotic sequences is approximately zero, which eliminates the contribution of the second line in equation (12), and all the other interference components follow the Gaussian distribution. Additionally, the channel coefficients are considered to be independent, and the spreading sequences and the Gaussian noises are also independent. Furthermore, the noise samples are uncorrelated and independent of the chaotic sequences, and the chaotic samples themselves are independent from each other. Therefore, it can be concluded that the terms corresponding to the different signal interferences present in the decision variable expression given by equation (12) are independent.
Knowing that the chaotic and noise signals are zero mean, the mean of the decision variable for a given i-th bit can be obtained (equation (13)), and the total variance of the decision variable is the sum of the variances of the different interference components (equation (14)). Based on the independent and uncorrelated characteristics mentioned earlier, the individual variance terms in equation (14) can be computed; equation (15) is obtained because the considered chaotic map has a normalized variance $E[x^2] = 1$. Hence, the total variance of the decision variable given by equation (14) can be formulated in closed form, and the $BER_{1,A}$ for user A during the first transmission phase follows from the Gaussian approximation, where $\mathrm{erfc}(x)$ is the complementary error function. This leads to the instantaneous (conditional) BER expression of user A, $BER_{1,A}(\lambda_{1,1,A}, \lambda_{1,2,A})$. If the largest multipath time delay is shorter than the bit duration, $0 < \tau_{1,A} \ll \beta T_c$, the ISI can be neglected compared to the interference within each symbol due to the multipath delay. However, if the delay $\tau_A$ increases, the ISI increases significantly; hence, for large values of $\tau_A$ the hypothesis of neglecting the ISI is not valid. The condition $0 < \tau_{1,A} \ll \beta T_c$ is commonly used in the literature [17,35,37], where the ISI can be neglected under this condition. Similarly, these studies have shown that for a large spreading factor the term $C$ representing the ISI in our derivation can be neglected, since $C \approx 0$. Therefore, the BER expression in equation (24) may be expressed as a function of $\gamma_{1,A} = (\lambda_{1,1,A}^2 + \lambda_{1,2,A}^2)\, E_b/N_0$, which represents the signal-to-noise ratio (SNR) at the relay side. Let $\bar{\gamma}_{1,1}$ and $\bar{\gamma}_{1,2}$ denote the average SNRs of the two paths of the received signal at the relay. For non-identical channel coefficients, i.e., $\bar{\gamma}_{1,1} \neq \bar{\gamma}_{1,2}$, the probability density function $f(\gamma_{1,A})$ is given by

$f(\gamma_{1,A}) = \frac{\exp(-\gamma_{1,A}/\bar{\gamma}_{1,1}) - \exp(-\gamma_{1,A}/\bar{\gamma}_{1,2})}{\bar{\gamma}_{1,1} - \bar{\gamma}_{1,2}},$

while for identical channels, i.e., $\bar{\gamma}_{1,1} = \bar{\gamma}_{1,2} = \bar{\gamma}$, it is given by

$f(\gamma_{1,A}) = \frac{\gamma_{1,A}}{\bar{\gamma}^2} \exp\left(-\frac{\gamma_{1,A}}{\bar{\gamma}}\right).$

Eventually, $BER_{1,A}$ can be obtained by averaging the conditional bit error rate over the fading distribution,

$BER_{1,A} = \int_0^{\infty} BER_{1,A}(\gamma)\, f(\gamma)\, d\gamma. \qquad (29)$

The BER expression for user B in the first transmission phase can be derived in a similar manner to that of user A, when considering communications over a multipath channel with different channel coefficients and different delay values for each user. Hence, the instantaneous BER for user B can be expressed as a function of $\gamma_{1,B} = (\lambda_{1,1,B}^2 + \lambda_{1,2,B}^2)\, E_b/N_0$, which represents the SNR at the relay. Finally, similar to user A, the final BER expression for user B in the first transmission phase is obtained by averaging the conditional bit error rate function over the fading distribution (equation (31)). Similarly, the instantaneous BER expression for user B in the second transmission phase can be expressed as a function of $\gamma_{2,B}$, the SNR at user B, and the final $BER_{2,B}$ expression follows by the same averaging (equation (33)). Finally, the analytical end-to-end BER expression of the proposed schemes 2 and 3 over the multipath channel can be obtained by substituting equations (29), (31), and (33) into equation (10).
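The averaging step above lends itself to direct numerical evaluation. The sketch below integrates a conditional BER over the two-path SNR density for both the identical and non-identical cases; note that the conditional BER used here is a generic placeholder, not the paper's exact DCSK expression, so the numbers it produces are purely illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def snr_pdf(gamma, g1, g2):
    """pdf of the instantaneous SNR of a two-path Rayleigh channel
    (sum of two exponentially distributed branch SNRs with means g1 and g2)."""
    if np.isclose(g1, g2):
        return gamma / g1 ** 2 * np.exp(-gamma / g1)          # identical branches
    return (np.exp(-gamma / g1) - np.exp(-gamma / g2)) / (g1 - g2)

def average_ber(conditional_ber, g1, g2):
    """Equation (29)-style averaging: integral of BER(gamma) * f(gamma) d gamma."""
    return quad(lambda g: conditional_ber(g) * snr_pdf(g, g1, g2), 0.0, np.inf)[0]

# placeholder conditional BER (illustrative only, NOT the paper's expression)
beta = 100
cond_ber = lambda g: 0.5 * erfc(np.sqrt(g / (4.0 * (1.0 + 2.0 * beta / g))))

print(average_ber(cond_ber, g1=10.0, g2=5.0))   # non-identical average branch SNRs
print(average_ber(cond_ber, g1=8.0, g2=8.0))    # identical average branch SNRs
```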
C. Special case: one user is in a low interference zone and the other user is in a high interference zone

In this section we analyse the network performance for two special scenarios: the first is when user A is in a low interference zone while user B is in a high interference zone, and the second is when user B is in a low interference zone while user A is in a high interference zone.
In the first scenario, when user A is in the low interference zone and user B is in the high interference zone, user A is affected by AWGN only, while user B's signal is transmitted over a multipath channel in addition to the AWGN. Therefore, in this case the channel coefficients in the $BER_{1,A}$ expression take the values $\lambda_{1,1,A} = 1$ and $\lambda_{1,2,A} = 0$, and the $BER_{1,A}$ expression of equation (24) simplifies accordingly (equation (34)). Equation (34) can then be substituted into equation (8) in order to obtain the end-to-end BER expression given in equation (10).
On the other hand, in the second scenario, when user B is in the low interference zone, equation (29) remains the same and the two BER expressions of equations (31) and (33) simplify in an analogous manner, with the corresponding second-path coefficients of user B set to zero.
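Bringing the pieces together, the relay and end-to-end combinations described around equations (8)-(10) reduce to a few arithmetic operations once the three link BERs are known. The helper below is a sketch of that bookkeeping under our reading of those equations (exactly one first-phase link in error at the relay, and exactly one of relay decision or second hop in error end-to-end).

```python
def relay_ber(ber_1a, ber_1b):
    """Relay error probability: exactly one of the two first-phase links is in error."""
    return ber_1a * (1.0 - ber_1b) + ber_1b * (1.0 - ber_1a)

def end_to_end_ber(ber_1a, ber_1b, ber_2b):
    """End-to-end error probability for the A -> relay -> B direction:
    either the relay symbol is wrong and the second hop is correct, or vice versa."""
    p_r = relay_ber(ber_1a, ber_1b)
    return p_r * (1.0 - ber_2b) + ber_2b * (1.0 - p_r)

# for small, equal link BERs the end-to-end BER is roughly three times the link BER
print(end_to_end_ber(1e-3, 1e-3, 1e-3))   # ~2.99e-3
```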
D. Throughput and link spectral efficiency analysis
We have shown in Section IV-B that the end-to-end BER performance is the same for schemes 2 and 3 when they experience the same channel profiles. In this section, we analyse and compare the throughput and link spectral efficiency of the two schemes. Typically, the effective throughput $R_t$ for a constellation size M is defined as the number of correct bits that a user receives per unit of time [46], which can be expressed as

$R_t = \frac{\log_2(M)}{T_n}\,(1 - BER),$

where $T_n$ is the time required to exchange bits between the nodes, $(1 - BER)$ denotes the ratio of correct bits received within the period $T_n$, and $M$ is the constellation size, which is equal to 2 for DCSK modulation.
For the frequency multiplexed DCSK scheme 3, two time slots are required to exchange data between the users; hence, the total required time is $T_n = 2T_b$. In contrast, exchanging data between users in the time multiplexed DCSK scheme 2 requires three time slots, which results in $T_n = 3T_b$. Therefore, in case of equal BER performance, the throughput of the frequency multiplexed DCSK scheme 3 is higher than that of the time multiplexed DCSK scheme 2. However, the bandwidth required by the two schemes is not the same and hence it is essential to analyse and compare the link spectral efficiency of the two schemes. The link spectral efficiency $\Gamma$ is defined as the ratio of the maximum throughput of the system to the bandwidth used by the data link and can be formulated as $\Gamma = R_t / W$, where $W$ is the total user bandwidth.
For the time domain multiplexing scheme 2, the total bandwidth required is equal to the DCSK bandwidth, which results in $W = W_s \approx 1/T_c$, where $T_c$ is the chip time.
On the other hand, the total bandwidth for the frequency domain multiplexing scheme 3 is twice the DCSK bandwidth, i.e. $W_3 = 2W_s$. Consequently, the link spectral efficiencies of the frequency and time multiplexed network coding schemes for a DCSK system can be obtained as $\Gamma_f = R_{t,3}/(2W_s)$ and $\Gamma_t = R_{t,2}/W_s$, where $\Gamma_f$ and $\Gamma_t$ are the link spectral efficiencies for scheme 3 and scheme 2, respectively. Therefore, according to equations (36), (42) and (43), considering performance at the same bit error rate, the throughput of the frequency multiplexed scheme 3 is 1.5 times higher than that of the time domain multiplexed scheme 2. However, the link spectral efficiency of scheme 3 is 0.75 times that of the time domain scheme 2. Therefore, the system designer has to choose between scheme 2 and scheme 3 according to their requirements.
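A small numerical check of these ratios is straightforward; the values of $T_b$, $W_s$ and BER below are illustrative assumptions chosen only to reproduce the 1.5 and 0.75 factors, which do not depend on them.

```python
import math

def throughput(ber, t_b, n_slots, m=2):
    """Correct bits per unit time: log2(M)*(1 - BER) bits every n_slots * T_b seconds."""
    return math.log2(m) * (1.0 - ber) / (n_slots * t_b)

def spectral_efficiency(ber, t_b, n_slots, bandwidth, m=2):
    return throughput(ber, t_b, n_slots, m) / bandwidth

t_b, w_s, ber = 1e-3, 1e5, 1e-3           # illustrative T_b, DCSK bandwidth W_s, equal BER
r2 = throughput(ber, t_b, n_slots=3)       # scheme 2: three time slots, bandwidth W_s
r3 = throughput(ber, t_b, n_slots=2)       # scheme 3: two time slots, bandwidth 2*W_s
print(r3 / r2)                             # -> 1.5
print(spectral_efficiency(ber, t_b, 2, 2 * w_s) /
      spectral_efficiency(ber, t_b, 3, w_s))   # -> 0.75
```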
V. PERFORMANCE RESULTS
In this section we present the performance of the different proposed schemes for different system parameters, while comparing the proposed theoretical BER analysis with simulation results. The average power gains $E[\lambda^2]$ listed in Table IV correspond to the different paths of the channels of users A and B during the first and second transmission phases. Additionally, $\tau_A$ and $\tau_B$ represent the delay spreads during the first transmission phase, and the delay spread in the second phase of relaying is given as well. Fig. 5 shows the Monte Carlo simulations and the theoretical end-to-end BER performance for schemes 2 and 3, for communication over the multipath Rayleigh fading channel using the system parameters summarized in Table IV. As shown in the figure, the simulation results closely follow the analytical BER expressions for all spreading factors, channel gains and delay spreads. The close match between the simulation results and the analytical curves validates the assumption of neglecting the ISI. Moreover, Fig. 5 shows the performance of the two specific scenarios studied in subsection IV-C, namely when one user has a low interference level while the other user has a high interference level. The lower bound performance of this network is obtained when user A transmits its data to user B while user B is in the low interference zone, or when user B transmits its data to user A while user A is in the low interference zone.
In this paper, we have not focused on the optimal spreading factor length for the DCSK system because this has been the subject of extensive research on the performance of DCSK systems for different spreading factors. For instance, the optimal spreading factor for a single carrier DCSK system was studied in [20] and for multi-carrier DCSK in [32]. Fig. 6 compares the simulated end-to-end BER performance of the ANC-DCSK and PNC-DCSK systems and the proposed schemes 2 and 3, when communicating over the multipath Rayleigh fading channel. In these results, the spreading factor used is β = 25, in addition to the channel parameters listed in Table IV. The figure shows that the multiplexed network coding schemes 2 and 3 outperform the PNC-DCSK and the ANC-DCSK systems. This can be attributed to the strong signal interference generated by the cross product of the different user data signals and the reference signal present in the decision variable for the PNC-DCSK and ANC-DCSK schemes. Additionally, it can be seen that the PNC scheme outperforms the ANC one, since the relay in the ANC scheme amplifies and forwards the resultant noisy signal to the users, which increases the multipath and noise interference in the signal. Figs. 7 and 8 compare the throughput and the link spectral efficiency of the proposed schemes 2 and 3. The results in the two figures validate the obtained theoretical expressions, where the analytical and simulation results of the two schemes follow each other closely. As shown in these two figures, the frequency multiplexed coding scheme 3 offers 1.5 times higher throughput than the time multiplexed coding scheme 2, while its link spectral efficiency is 0.75 times that of scheme 2.
Fig. 8. Link spectral efficiency comparison between the time multiplexed network coding scheme 2 and the frequency multiplexed network coding scheme 3 for β = 50.
VI. CONCLUSION
Three network coding schemes combined with DCSK modulation under a multipath fading channel were proposed and analyzed. In the proposed schemes, the users communicate through a relay node, where the relay decodes the received signals sent by the different users and retransmits the mapped symbols. On the other hand, each user decodes the received symbols from the relay and recovers the data transmitted by the other user of its pair by subtracting its own data signal. The proposed scheme 1 corresponds to the design of a PNC-DCSK scheme, where the users transmit their data synchronously to the relay while sharing the same spreading code and bandwidth. We have presented the analysis of this system, showing the existence of high-power interference in the combined signal at the relay. Hence, in order to mitigate this interference, we proposed to separate the transmitted signals in the first phase of relaying through two novel schemes, referred to as scheme 2 and scheme 3, based on time and frequency multiplexing techniques. The separation between signals in scheme 2, which is equivalent to SNC, is performed in the time domain during the first phase of relaying. On the other hand, in scheme 3 we used frequency multiplexing to separate the two users, which requires twice the bandwidth of schemes 1 and 2. The performance of schemes 2 and 3 was analysed in different scenarios and the corresponding end-to-end bit error rate expressions under the multipath Rayleigh fading channel were obtained, validated by simulations and compared to the PNC and ANC schemes. In addition, the throughput and the link spectral efficiency of the proposed schemes were derived and compared with the simulation results. Finally, according to the analysis in this work, improving the PNC-DCSK scheme 1 remains a topic for future work: mitigating the interference in this system would lead to a significant increase in throughput and thus considerably improve its overall performance.
|
2015-05-12T03:39:17.000Z
|
2015-05-12T00:00:00.000
|
{
"year": 2015,
"sha1": "0c5f0609a09d88d5d804f04da5007d6658ac51a0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0c5f0609a09d88d5d804f04da5007d6658ac51a0",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
21183420
|
pes2o/s2orc
|
v3-fos-license
|
Biobanking—Budgets and the Role of Pathology Biobanks in Precision Medicine
Biobanks have become an important component of the routine practice of pathology. At the 2016 meeting of the Association of Pathology Chairs, a series of presentations covered several important aspects of biobanking. An often overlooked aspect of biobanking is the fiscal considerations. A biobank budget must address the costs of consenting, procuring, processing, and preserving high-quality biospecimens. Multiple revenue streams will frequently be necessary to create a sustainable biobank; partnering with other key stakeholders has been shown to be successful at academic institutions which may serve as a model. Biobanking needs to be a deeply science-driven and innovating process so that specimens help transform patient-centered clinical and basic research (ie, fulfill the promise of precision medicine). Pathology’s role must be at the center of the biobanking process. This ensures that optimal research samples are collected while guaranteeing that clinical diagnostics are never impaired. Biobanks will continue to grow as important components in the mission of pathology, especially in the era of precision medicine.
biospecimens and the lack of diversity among the source patients. 4,5 The Biorepositories and Biospecimen Research Branch (BBRB) at the National Cancer Institute (NCI) has issued best practice guidelines that provide excellent recommendations and templates for the collection, storage, and distribution of high-quality biospecimens. 6 Additionally, the BBRB web site has a Biobank Economic Modeling Tool available to help scientists and institutions develop financial planning and an understanding of the numerous resource costs. Table 2 provides a list of abbreviations that are frequently used in the biobanking literature. Furthermore, the best practices document recommends the development of a business plan that provides justification for institutional commitment and quantification of start-up and sustainability costs, including a formal continuity plan and emergency response (disaster) planning. 6 Finally, as a functioning core or center: quality, safety, service, customer satisfaction, and a sustainable revenue cycle should all be part of a biobank's mission.
It is well documented that these activities, critical to a functioning biobank, add a significant cost-burden to participants in the biobanking process. Collection of high-quality biospecimens usually requires extramural and intramural funding, both formal and creative, to fully allow a private organization or academic department to recover costs. 7,8 This section describes the biobanking business experience of a midsized pathology department in an academic medical center (Boston University School of Medicine and Boston Medical Center) that serves an underrepresented patient population at a private, nonprofit, safety net hospital.
Although the fundamental costs of establishing a successful biobank have been extensively described, 9 it is important to emphasize that the scope of the business plan should include the start-up phase, the operational phase, and the plans for cost recovery in addition to legacy planning. Considerable administrative effort must be invested in preparation of a business plan or Statement (scope) of Work. Additionally, the costs of hiring and training staff, identifying and engaging participating faculty (surgeons, nurses, pathologists, and information technology [IT] personnel), purchasing equipment and supplies, developing IT infrastructure to maintain data, and identifying appropriate space should all be considered. Operational start-up costs include but are not limited to preparing and submitting institutional review board protocol(s) with informed consent forms, possibly in more than one language; writing Material Transfer Agreement and Data Use Agreement documents; and building appropriate space with equipment and personnel space. Running a high-quality biobank requires maintaining and monitoring proficiency, similar to other functions in the laboratory. One method for monitoring proficiency is to develop a quality management system (QMS), if one is not already in place. A well-developed QMS includes standard operating procedures (SOPs), document control processes, emergency response and recovery plans, security of physical and virtual data, staff competencies, equipment maintenance and supply purchase records, and good laboratory practice (GLP) guidelines. A corrective action/preventative action system should be an integral part of the QMS, as should a system of internal and external audits to ensure the overall quality of the biobank. Accreditation in the College of American Pathologists (CAP) Biorepository Program 10 is highly suggested but adds another expense. Developing or buying the appropriate software to manage all data is an additional but necessary step in setting up a high-quality biobank. Elements of the IT system should allow for storage management, specimen annotation, and data security. Appropriate management of staff, including safety training, compliance and competency training, and maintenance of staff certifications, would be required. The largest portion of the initial start-up will be in capital planning, followed by salaries and fringe benefits. 2,4,7,11,12 In the operational phase of a biorepository, the single largest cost will be salary for personnel. Hiring well-qualified staff who can perform more than one task is highly beneficial and a good investment. Biospecimen collection activities can be divided into the preanalytic, analytic, and postanalytic phases traditionally used in laboratory medicine. In the preanalytic phase, patients are screened and entered into the study. Significant time may be required to identify appropriate patients who have consented to donate biospecimens. Annotation of clinical data could occur in the preanalytical stage or in the postanalytical stage.

Table 1. Biobank Considerations.*
What specimens will be banked (only tissue destined for discard, or blood, saliva, etc)?
Will the biobank offer additional services (ie, histology)? What services should be offered?
How will the data be managed?
Who are the institutional stakeholders?
What are the ethical implications for the patients involved?
*There are many questions that should be considered prior to starting a biobank; the list above is far from comprehensive.
During the analytic phase, biospecimens such as blood or urine may be collected before and after surgery, which could be used to extract either nucleic acid (DNA or RNA) or relevant biomarkers. Tissues need to be collected after surgery with optimized ischemic times, so working closely with the surgical staff and nursing is essential. Samples should be dissected by a board-certified pathologist or a pathologists' assistant and selected biobank samples ultimately preserved by freezing or formalin fixation and paraffin embedding (FFPE). The samples may also be processed for cell or tissue culture depending on the scientific requirements. Quality control (QC) examination of tissues is essential to ensure that tumor and/or normal adjacent tissue has been properly sampled by having a board-certified pathologist review the slides. Postanalytic responsibilities include sample storage either in freezers or in the vapor phase of liquid nitrogen (LN2), distribution and shipping of samples, as well as maintenance of databases with annotated demographic and clinical history. Another critical aspect may be the need to change the annotation of the sample in the event that the original diagnosis changes. A biobank may also wish to consider accepting data from investigators back into the biobank to expand the utility of the biospecimen. Significant resources should be budgeted to ensure data security and adherence to the Health Insurance Portability and Accountability Act of 1996 for the protection of patients. One of the most important roles of the biobank is to act as an honest broker by being the link between clinical and research activities. Expanding the role of the IT software to allow internal clinical systems to speak with the biobank's database and allow for automatic deidentification in fulfillment of the honest broker's role is an important aspect to consider. Management of the data through this IT system will not be static and will need frequent updates to stay current with the clinical systems as well as to update data definitions as those evolve. 13,14 Ongoing administrative duties might also include regular report preparation, presentations on project progress, managing site visits, and accreditation inspections. Time and money should be spent on internal audits to assure the quality of the specimens as well as to assess the overall usage of the biobank. 3 One of the key aspects to consider in setting up a biobank is that the goal should not be how many specimens can be collected, but rather what the needs of the institution are, so that there is a high turnover of samples. 3 Additional requirements when collecting data include considering the ethical and legal processes of sample collection. In collaboration with the University of New Mexico, University of Pittsburgh, Emory University, and the NCI biospecimen preanalytic variables (BPV) study, the Boston Medical Center developed an ethical, legal, and social implications questionnaire that was subsequently implemented. 4,15 This allowed biospecimen donors to have a voice in the process of biospecimen collection by completing the questionnaire.
In terms of assessing expense, the process for a single patient (informed consent, donation, and responding to a follow-up questionnaire) required about 3 weeks from consent to final assessment. The final assessment was used to determine whether the specimen met the required collection needs. Securing funding for technicians, biobank managers, quality managers, quality directors, pathologists, IT personnel, and of course the principal investigator is essential in this portion of the biobank's life cycle. As with other aspects in pathology, a QMS and formal quality improvement (QI) plan should serve as the foundation for a successful program. Strategic budget planning for replacing major equipment must be in place to avoid just-in-time emergency requests. A 7- to 10-year depreciation plan should be considered with a view to replacing large equipment such as freezers and fridges, LN2 storage tanks, temperature monitoring and alarm systems, centrifuges, and key histology equipment. Contingency emergency response planning for major equipment failure must also be considered; a written plan for rapid relocation of biospecimens should a freezer fail is essential to avoid confusion and loss of valuable specimens. Ideally, a backup LN2 storage tank and a −80°C freezer with spare capacity should be maintained near the core biobank.
Cost recovery should be the final part of the business plan to ensure the biobank will be a sustainable resource for the institution. Initial funds will typically come from donors and substantial institutional support. This will generally sustain the biobank through the first 3 years, which has been shown to be the typical start-up phase. 2,12 After this phase, other means of support will need to be found. Grant funding is one source, but grants are not always a consistently reliable revenue stream for core facilities. 12 Contracts with governmental agencies or other tissue repositories tend to provide a more reliable source of funds, but adherence to procurement protocols will necessitate more administrative initiatives to assure compliance. 16 Finally, a fee-for-service schedule can be generated; however, extensive thought should be given to determine: (1) What will internal researchers be willing to pay? (2) Will there be a different pricing schedule for an internal versus external customer? (3) How will the different services be priced? (4) Can training and education be another service offered? These are among some of the questions that would need answers. 16 Most likely, a combination of the 3 types of funding will be needed to sustain a successful, high-quality biobank, but with an initial plan in place and yearly review of the plan, a sustainable biobank should be attainable.
Identifying other revenue streams to support the biobanking mission including developing a histopathology core, tissue microarray, nucleic acid preparation, or immunohistochemistry services offers funding opportunities. Digital scanning of slides may also be offered on a fee-for-service basis. The Moffitt Cancer Center, Tampa, Florida offers an excellent model of Total Cancer Care that has greatly benefitted patients, research partnerships, and the biobanking core. Partnerships with the pharmaceutical industry and industry-sponsored clinical trials support the mission of the Moffitt Biobank and leverages biomarker assay development for improved care. Collaborations with epidemiologic or public health studies can also lead to nontraditional funding streams. Finally, major philanthropic gifts can be solicited with institutional support. Focusing on the connectivity between collecting high-quality biospecimens and translational research science often captures the attention of donors, particularly if they or a family member have had a particular disease.
As mentioned previously, legacy planning is a key strategic concept to bear in mind should extramural funding decline. Establishing an approved core service at an academic institution may provide bridge funding as support. Importantly, senior leaders at the organization should be educated by biobanking leadership as to the costs and resources required to run the service well in advance of any request for funding. Intramural funding may also be available in small grants. Demonstrating and documenting a track record of excellent service that supports other investigators will improve the likelihood of institutional support. If the biobank is department based, appropriate budgeting can keep a biobank stable at low cost. Commercial entities and "tissue brokers" as well as industry clinical trials can provide steady revenue though an increase in administrative burden should be anticipated.
Budget Considerations in Consortium Settings
As already discussed, biobanking is an important, but expensive, research infrastructure at academic medical centers. Covering the expense of biobanking and allied processing/analytic services is usually multifaceted. While charging fees to investigators is fairly common, realistic fee structures that investigators are willing or able to pay are seldom sufficient to cover the costs of required biorepository personnel and equipment. Thus, institutional subsidies are generally required to cover costs of biorepository functions not covered by user fees. Some of these costs may be covered by funding as "core facilities" for extramurally funded research projects, typically either as part of large research center grants or as part of multi-investigator consortia. Indeed, documented biorepository facilities are often a prerequisite for such large grants; hence it is in an institution's best interest to provide sufficient funding to ensure the presence of robust biobanking infrastructure.
Another strategy to leverage existing biorepository infrastructure, and to utilize extra capacity that may be present in biospecimen resources, is to pursue funding targeted for biospecimen-specific activities. Access to high-quality human biospecimens is recognized as a national research need that is often not met by local resources. To address this issue, national research funding agencies have occasionally created programs that are specifically designed to lower the barriers to obtain human biospecimens for the general research community or have searched for partners to provide biospecimens for specific research programs. Some examples of this are The Cancer Genome Atlas (TCGA), 17 Biospecimen Preanalytic Variables (BPV) program, 18 and the Office of Clinical Proteomic Tumor Analysis Consortium. 19 The experience at the University of Virginia (UVA) School of Medicine serves as an example of how to work in a consortium. Biorepository functions reside in a core facility named the Biorepository and Tissue Research Facility (BTRF). The BTRF and allied biospecimen programs employ 10 full-time employees (2 faculty-level managers and 8 technicians) with an annual budget of approximately 1 million dollars. The BTRF covers approximately 24% of its budget from local user fees, receives 18% of its budget for its support of Cancer Center activities and receives a subsidy from the School of Medicine of approximately 13% of its budget. The remaining 45% of its budget consists of support obtained from its extramural activities.
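As a rough illustration of how such a mixed funding model adds up, the short sketch below recomputes approximate dollar amounts from the percentage shares quoted above; the $1 million total is the approximate annual figure given in the text, and the calculation itself is purely hypothetical.

```python
# Hypothetical sketch: splitting an approximately $1M annual biorepository budget
# by the funding shares quoted in the text (user fees, Cancer Center support,
# School of Medicine subsidy, extramural activities).
TOTAL_BUDGET = 1_000_000  # approximate annual budget (USD)

funding_shares = {
    "local user fees": 0.24,
    "Cancer Center support": 0.18,
    "School of Medicine subsidy": 0.13,
    "extramural activities": 0.45,
}

for source, share in funding_shares.items():
    print(f"{source:28s} {share:5.0%}  ~${share * TOTAL_BUDGET:,.0f}")

# Sanity check: the quoted shares account for the full budget.
assert abs(sum(funding_shares.values()) - 1.0) < 1e-9
```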
Two major grants make up the bulk of the BTRF extramural activity. The first is a competitive cooperative agreement (UM1) grant from the NCI to serve as the Mid-Atlantic division of the Cooperative Human Tissue Network (CHTN). The CHTN consists of 6 divisions that work together to fulfill research requests for human tissues and biofluids and acts primarily as a prospective procurement service. Although the CHTN maintains uniform SOPs that govern specimen collection and QC, unique procurement services are being utilized at each site, so that the labeling, packaging, and associated services will vary to a degree. In addition to biospecimen procurement, the BTRF is the major manufacturer of tissue microarrays for this consortium.
The second major extramural grant that the BTRF supports is a competitive award from the Congressionally Directed Medical Research Program of the Department of Defense to host the Lung Cancer Biospecimen Resource Network (LCBRN). 20 The LCBRN is a consortium of 3 institutions that recruit lung cancer patients undergoing cancer resections to donate tissue, bronchial lavage fluid, blood, saliva, and urine samples. Biofluids are collected preoperatively and at intervals postoperatively. Patients are followed continually to obtain clinical follow-up data as well as serial biofluid samples. The UVA serves as the coordinating center of the LCBRN and provides collection kits to all collection sites, so all samples are collected under uniform SOPs and are uniformly labeled and packaged. All biospecimens are sent to UVA for central histology, QC, and storage. Investigators interact with the coordinating center at UVA to submit applications to receive biospecimens and annotated clinical data.
Although such extramural funding is not specifically designed to support local biorepository efforts, the local efforts are bolstered by the ability to cost share personnel, informatics infrastructure, and equipment with such programs. An intangible benefit is the additional expertise that local biorepository personnel develop by interacting with an expanded number of scientific investigators and with biorepository personnel at institutions within such national networks. As with any academic endeavor, the ability to compare and discuss different approaches to procurement techniques, quality measures, and regulatory requirements leads to the exchange of ideas that can improve local practices and stimulate creative problem solving.
Pathology-Centered Biobank
There are several advantages to establishing a biobank managed by pathologists. First, all specimens are in a single location, allowing investigators a single source of biospecimens.
Second, cost savings may occur through centralization. Finally, and perhaps most importantly, a pathology-centered biobank ensures that the tissue necessary for diagnostic work has the highest priority so that no patient is harmed. Such a biobank was established at Duke University in 2012. The Biospecimen Repository and Processing Core (BRPC) and its broad consent protocol were established in the Department of Pathology at Duke University with investment from the Duke Cancer Institute and the School of Medicine. Under the universal consent protocol at Duke, thousands of patients have now given permission for the storage and future use of their excess clinical specimens annotated with their clinical information. Most of these participants also opted to donate an additional blood sample.
The concept of "governance" in biobanking not only describes the responsible custodianship of the biospecimen collection within the physical biorepository but also includes oversight of specimen utilization and contingency planning for collection maintenance if funding loss requires infrastructure decommissioning. The novel disease-based group (DBG) paradigm for governance has been vital to BRPC's success by increasing investigator buy-in and engagement. In the DBG model, governance is shared between the direction and oversight responsibilities of the DBG and assigned physical custodianship of the BRPC. A multidisciplinary subcommittee within each DBG is responsible for approving the distribution of limited samples (eg, frozen tissues). In addition, it is clear that disease-specific biospecimen collections (having been created through shared and central investment) do not leave the institution if a single prominent investigator departs. In the event of defunding or decommission of the biorepository, the disease-specific collection would be transferred to the custody of DBG leadership.
In partnership with BRPC, the DBGs donate the effort of their clinical research staff to recruit and enroll patients onto the BRPC's broad consent protocol. These research nurses and coordinators become key personnel on the protocol and often co-consent in combination with appropriate clinical trial(s). Since 2012, over 90 clinical research staff members across multiple disease groups have been trained to administer the broad consent. Combined with the efforts of 2 dedicated consenting staff members within BRPC, this model has allowed considerable accrual. Upcoming endeavors at Duke, including e-consent, are expected to further expand participation.
Because the principal investigator for the broad consent protocol is the pathologist-director of the BRPC, the BRPC takes responsibility for training all Duke clinical research staff who administer the broad consent. Consent training sessions include an hour of in-person didactic teaching as well as multiple observed consent events. Training includes references to biobanking publications geared toward laypersons such as the NCI's brochure "How you can help medical research: donating your blood, tissue, and other samples." 21 Staff are taught that excess tissue procurement occurs only in partnership with the pathologist in the surgical pathology suite. This understanding helps consenting staff reassure patients that their medical care will not change if they elect to participate.
Some may question the value of procuring fresh and frozen tissue samples at a time when more and more molecular tests can be performed on archival formalin-fixed paraffin-embedded tissues. [22][23][24] However, at Duke, one of the most valuable BRPC activities is the distribution of fresh tissue aliquots to research laboratories whose work absolutely requires fresh tissue. This includes laboratories for immune cell profiling, cell culture, and patient-derived xenograft creation. In these cases, the resulting models and data are all united by a single BRPC ID, allowing them to be used together in research representative of a single patient's disease.
Financial independence and sustainability of biorepositories is a lofty, and many say unattainable, goal. 25,26 Indeed, the BRPC receives yearly subvention from both Duke Cancer Institute and Duke University School of Medicine. Additionally, the BRPC has developed mechanisms for improving cost recovery aimed at earlier engagement of disease groups in clinical trial preparation (to allow for proper budgeting) and improved remuneration for pathologist and administrative time when processing archival tissue requests. The BRPC also received some cost recovery by providing samples to TCGA. 17 Currently, 48 federal and industry-sponsored clinical trials are receiving services through BRPC at different rates of cost recovery as required. As noted in section "Biobanking Budget Considerations", true cost recovery is attainable in the support of industry-initiated clinical trials.
Finally, it should be mentioned that the BRPC received biorepository accreditation from the CAP in 2013 and maintains this high standard of safety, quality, and reproducibility in all operations. The biorepository accreditation program by the CAP continues to grow in popularity, with 47 US biorepositories now accredited. 27 A large portion of the CAP requirements were recently incorporated into the NCI's best practices for biorepositories. 28 At Duke, CAP accreditation allows BRPC to work in partnership with Duke Clinical Laboratories and Anatomic Pathology for tissue procurement, discard blood procurement, and archival specimen retrieval. The leadership BRPC and Pathology have shown in education, quality, and investigator engagement have established the Department of Pathology as the "home of biobanking" at Duke.
Science-Driven Biobanking
High-quality human samples for research are key for personalized precision health care of the future. The initiative has taken on increased importance with the President's declaration of the Cancer Moon Shot 29 in 2016, and several organizations are rising to the challenge. Some groups have already put in place the appropriate infrastructure to provide high quality biospecimens. In 2015, a major new initiative at Memorial Sloan Kettering Cancer Center was launched by creating the Precision Pathology Biobanking Center (PPBC), which is being built around 5 highly interconnected pillars (Figure 1).
When designing the PPBC's specimen acquisition, preservation, storage, and distribution workflows, the concept of "future proofing" was front and center: all samples (tissues, bloods, and other liquid samples) are rapidly procured (time to freezing <15 minutes) and are uniformly held in vapor-phase LN2 rather than −80°C freezers. It has been convincingly shown that some of the most interesting components of the pathophysiome (RNA, posttranslational modifications of proteins, and small metabolites) degrade unpredictably even at −80°C, while they remain stable in vapor-phase LN2 (−180°C). 28 The PPBC banks biospecimens from approximately 7000 new patients with cancer per year, including surgical resections, interventional radiology biopsies, and companion blood and body fluid collections. Diseased and matched normal tissues are frozen in LN2 without further additives, and spatially indexed FFPE blocks are prepared that match each sampling location of a corresponding frozen vial. The availability of carefully matched FFPE blocks has numerous advantages, including (1) morphologic quality assessment of each mirrored frozen tissue (without need for freeze-thaw cycling); (2) availability for research (independent of frozen samples) for sequencing, immunohistochemistry, fluorescent in situ hybridization (FISH), or other assays; (3) dedicated "clinical trial blocks" and "tissue microarray blocks"; and (4) source of routine hematoxylin and eosin (H&E) slides that are whole-slide scanned prospectively as a digital representation of each sample in the tissue biobank ("digital biorepository"). Highest quality for the research FFPE blocks is maintained by consistent annotation of warm and cold ischemia times, fixation chemistry and time, and block processing protocols. The FFPE processing can be separated from or embedded in existing clinical [Clinical Laboratory Improvement Amendment (CLIA)-grade] workflows depending on volumes and institutional preference. We prefer to include research FFPE processing in the existing clinical stream because it makes the blocks a priori CLIA-compliant (advantageous for regulatory submission of research results, such as to the FDA), and it affords cost savings (no need for a duplicate FFPE processing equipment line). Blood (obtained both pretreatment and posttreatment) is processed into frozen serum, plasma [double centrifuged for use as a source of cell-free DNA (cfDNA)], and buffy coat aliquots. More than 30 000 specimen units are created annually. There are more than 1600 units of frozen samples and 1000 units of FFPE material available for immediate research. Additionally, a rapidly growing portion of PPBC activities (about 1700 fresh samples) involves "living" biobanking (ie, the creation of organoids, mouse xenografts, primary cell lines, etc). The PPBC's biobank has developed innovative QI metrics and processes, including RNA integrity monitoring in sentinel samples and participation in international proficiency testing programs (eg, International Society for Biological and Environmental Repositories Integrated BioBank of Luxembourg Proficiency Testing Program). 30 Informatics may provide an important component of future biobanks, since a physical repository of biospecimens is only as good as the level of annotation and knowledge that can be associated with each and every specimen in the bank.
Pathology as a discipline will increasingly evolve into the medical specialty of dynamic data management and big data integration to drive patient care ("theranostics"), rather than the status quo of "just" providing static diagnoses. Translated to biobanking, this means that we need to create tools that cross-reference physical samples in real time to all other data that may be known about a patient (eg, clinical status, therapeutic status, imaging results, clinical trial participation, molecular features of the disease, etc). Building a longitudinal representation of every patient (from diagnosis through stages of treatment to follow up/recurrence, etc) in which all physical samples are mapped onto a common time line together with all other observational or interventional medical events creates this approach. For example, the question may be asked: how many frozen research samples (containing primary, metastatic, and/or normal) does the bank hold from patients born after 1960 with a diagnosis of KRAS-mutated colon cancer? These searches have become instrumental tools for feasibility arguments in grant submissions and hypothesis generation for numerous biomarker studies, as highlighted in the section on working with consortiums.
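A minimal sketch of the kind of cross-referencing query described above is shown below; the table layout, column names, and the KRAS/colon-cancer filter are all hypothetical and stand in for whatever schema a given biobank information system actually uses.

```python
# Hypothetical sketch: counting frozen research samples for a cohort question such as
# "patients born after 1960 with KRAS-mutated colon cancer". Schema is illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patients (patient_id TEXT PRIMARY KEY, birth_year INTEGER,
                       diagnosis TEXT, kras_status TEXT);
CREATE TABLE samples  (sample_id TEXT PRIMARY KEY, patient_id TEXT,
                       preservation TEXT, tissue_class TEXT);
INSERT INTO patients VALUES ('P1', 1972, 'colon cancer', 'mutated'),
                            ('P2', 1955, 'colon cancer', 'mutated');
INSERT INTO samples  VALUES ('S1', 'P1', 'frozen', 'primary'),
                            ('S2', 'P1', 'frozen', 'normal'),
                            ('S3', 'P2', 'frozen', 'primary');
""")

count = conn.execute("""
    SELECT COUNT(*)
    FROM samples s JOIN patients p ON p.patient_id = s.patient_id
    WHERE s.preservation = 'frozen'
      AND p.diagnosis = 'colon cancer'
      AND p.kras_status = 'mutated'
      AND p.birth_year > 1960
""").fetchone()[0]
print(count)  # -> 2 (only patient P1 meets the cohort criteria)
```

In practice such a query would sit behind the honest-broker layer described above, returning only counts or deidentified sample lists for feasibility and hypothesis-generation purposes.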
Delivery of health care is currently at the beginning of a wave of major new disruptive technologies that will become part of our diagnostic and theranostic tool set. With next generation sequencing reaching technological maturity in clinical laboratories, we have already seen new technologies (such as mass spectrometry-based deep proteomics, functional assessment of pathway activities, metabolomics, highly multiplexed immunofluorescence, ex vivo living models of drug response, etc.) that hold the promise of changing the way we will assess and monitor disease. 31 For example, mass spectrometry is currently being assessed as a highly quantitative and multiplexed tool (assessing several thousand proteins in tissue in parallel) that can complement or even replace conventional immunohistochemistry (no antibody requirements, mutations, and posttranslational activation states such as phosphorylation are directly detectable, etc). Importantly, many of these new technologies require highest quality frozen biobanked or fresh living samples while conventional FFPE-based clinical archives are simply not usable or highly suboptimal.
Pathology has historically not been a driver discipline in clinical trials or clinical drug development, with its role often limited to providing slide review for patient enrollment or sending FFPE material to third-party trial sponsors. In the era of what may be called "specimen-centered, molecularly driven" clinical trials (eg, basket trials like NCI Molecular Analysis for Therapy Choice 32 ), our discipline is becoming an active player in clinical trials and drug development. The PPBC trials division provides a dedicated platform for pathology's representation already at the earliest stages of new trials (including design, protocol writing, budgeting, direct discussions with sponsoring pharma, specimen acquisition, companion diagnostic development, etc). For example, at Memorial Sloan Kettering Cancer Center, a dedicated phase 1 biobank for patients on first-in-man clinical trials has been created that provides a priceless resource for research.
The combination of comprehensive biobanking and new technologies provides a natural and externally visible infrastructure that now allows pathologists to engage directly with the biotechnology and pharma sector to carry out joint R&D projects, such as development of new companion diagnostics, evaluation of biomarkers, or the use of new instrumentation. Such projects frequently also hold the opportunity for intellectual property generation. In addition, funding raised through research and commercialization can represent a major contribution to the long-term sustainability of research biobanking.
Conclusion
Biobanking offers opportunities for pathologists to actively engage in multiple new research and diagnostic initiatives. A strong case can be made that biobanking should be housed within pathology to ensure optimal specimen procurement without compromising the patient's diagnostic material. There are several important fiscal considerations that must be considered in creating a sustainable biobank that serves the needs of our patients and fellow investigators.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported in part by NIH grants AA022122 (DGR), AI112887 (DGR), NCI contracts BPV HHSN261200800001E (CDA), and CPTAC HHSN261201600011I (CDA).
Comparative Physico Mechanical Study of Cements CEM II 42.5R in Cameroon: Case of DANGOTE and CIMENCAM
The present work deals with a comparative study of the physico-mechanical characteristics of different types of CEM II 42.5R cement produced and used in Cameroon. The recent policy of integration and promotion of products in the sub-region has allowed four manufacturers to settle in Cameroon and offer products whose characteristics are most often unknown to consumers. To carry out this work, we conducted several tests on the CEM II 42.5R cements of CIMENCAM and DANGOTE. These tests were carried out on fresh concrete, on mortar, and on a normalized cement paste. The present study mainly highlights the specific characteristics of the CEM II 42.5R cement of each brand.
Introduction
Cements are hydraulic binders made of fine powders which, when mixed with water, form a paste that is able to hydrate and gradually harden after a longer or shorter time [1]. They are composed of anhydrous, crystalline or vitreous constituents, essentially containing silica, alumina and lime. The hardening is due to the hydration of certain components, mainly silicates and calcium aluminates; the proportion of lime and reactive silica must be at least 50% of the cement mass [1]. Cement has been used for millennia: in ancient Egypt, a plaster mortar bound the stones. The Chinese and the Mayas also built using lime-based mortars, obtained by baking limestone rocks: this is the basis of the cement still manufactured today. Over the centuries, cement has been perfected.
The Romans used lime reinforced with volcanic ash (pozzolana) to make their mortar, which was then able to set under water. The empirical methods were later perfected with the theory of hydraulicity, which described the proportions of lime and clay necessary to produce cement by baking. From this, the industrial production of cement could start [4] [5] [6]. In 1824, Joseph Aspdin improved the "recipe" and created Portland cement. But it was in France that the polytechnician Pavin de Lafarge set up his first lime kilns at Teil, in Ardèche, in 1833 [4].
The cement industry now offers the user a large number of brands with specific areas of use. The wide range of compositions, strengths, and setting and hardening speeds matches the requirements for the construction of buildings or civil engineering structures [6] [7] [8]. In Cameroon, the cement industry has grown rapidly: in addition to CIMENCAM, we have seen the establishment of DANGOTE, MEDCEM, and CIMAF. Each cement plant has its own specificity in the mineralogical and chemical composition of the cements manufactured, according to the geological context, the atmospheric conditions and the performance requirements [4].
Cameroonian cement plants produce several types of cements, including CEM II 42.5 R, and the composition varies from one cement plant to another. These cements are strongly solicited owing to the significant demand from builders for civil engineering and real estate works. The success of a structure depends on mastering the characteristics of the cement, the inputs (aggregates and water) and the atmospheric conditions [1]-[9]. However, it has been found that these parameters, particularly the characteristics of the cements, are sometimes neglected during the construction of structures [7]-[13]. Figure 1 shows the cement manufacturing process.
The purpose of this study is to present the specificities of the CEM II 42.5 R cement from each cement plant (the case of CIMENCAM and DANGOTE) and the areas of use that arise from these properties. The first part is devoted to generalities about cements, as well as to the materials and methods used. The second part is dedicated to the experimental study, namely a comparative study of the CEM II 42.5 R cements produced by CIMENCAM and DANGOTE, followed by an analysis of the results and by recommendations.
Materials and Methods
This part is devoted to materials and methods. We begin by presenting the characterization of the materials, summarized through their general properties and their mechanical and physical characteristics. We then present the experimental devices that allowed us to carry out the tests.

Granulometric (sieve) analysis makes it possible to characterize the aggregates by determining the size of the grains which constitute them and the percentage of grains of each size. It is carried out with sieves (Figure 2) according to standard NF P 18-560 [14]. Care must be taken not to confuse granulometry, which is the science that determines grain size, with granularity, which is the dimensional distribution of the grains of an aggregate [5].
The studied material is placed at the top of the sieves and the classification of the grains is obtained by vibration of the sieve column. At the end of the test, we draw the particle size curve, which gives the respective cumulative weight percentages passing each sieve; from it the coefficient of uniformity and the coefficient of curvature are obtained (Equations (2) and (3)). Finally, the fineness modulus (Mdf) of the sand [15]-[20] is calculated.
The coefficient of uniformity (Cu) and the coefficient of curvature (Cc) are defined as

Cu = D60 / D10    (2)
Cc = (D30)^2 / (D10 × D60)    (3)

where D10, D30 and D60 represent the diameters of the elements corresponding to 10%, 30% and 60% cumulative passing, respectively. When Cu < 2, the particle size distribution is said to be uniform; when Cu > 2, it is said to be spread. A sand is well graded when 1 ≤ Cc ≤ 3 [27].
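A short sketch of these calculations on a made-up sieve curve is given below; the sieve sizes and passing percentages are hypothetical, and the fineness modulus is computed here in the common form (sum of cumulative percentages retained on the sieves divided by 100), which should be checked against the exact definition used in [15]-[20].

```python
# Hypothetical sketch: Cu, Cc and fineness modulus from a made-up sieve analysis.
import numpy as np

sieve_mm = np.array([0.08, 0.16, 0.315, 0.63, 1.25, 2.5, 5.0])   # sieve openings (mm)
passing  = np.array([3.0, 10.0, 28.0, 55.0, 78.0, 92.0, 100.0])  # cumulative % passing

def diameter_at(percent):
    """Interpolate the grain diameter corresponding to a given % passing."""
    return float(np.interp(percent, passing, sieve_mm))

d10, d30, d60 = (diameter_at(p) for p in (10, 30, 60))
cu = d60 / d10                 # Cu = D60 / D10
cc = d30**2 / (d10 * d60)      # Cc = D30^2 / (D10 * D60)

# Fineness modulus: sum of cumulative % retained on the sieves, divided by 100.
fineness_modulus = np.sum(100.0 - passing) / 100.0

print(f"D10={d10:.3f} mm, D30={d30:.3f} mm, D60={d60:.3f} mm")
print(f"Cu={cu:.2f} ({'uniform' if cu < 2 else 'spread'}), Cc={cc:.2f}")
print(f"Fineness modulus = {fineness_modulus:.2f}")
```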
Sand Equivalent to 10% Fines
The sand equivalent is used to measure the cleanliness of the sand and to detect the presence of fine elements in it. It is determined according to standard NF P 18-598 [15]-[21]. It is the ratio, multiplied by 100, of the height of the sedimented sandy part to the total height of the flocculate and the sandy part [15]. The test consists of flocculating the fine elements of sand suspended in a washing solution and then, after a rest period, measuring the height of the sedimented elements.
Figure 3 shows the device used to perform the aforementioned test.
Resistance to Fragmentation
In order to verify the resistance to fragmentation, the Los Angeles test was carried out according to the NF EN 1097-2 standard; it is intended to measure the resistance to fragmentation by impact of a sample of aggregates. It is expressed by the Los Angeles coefficient (L), which relates the mass of material reduced below 1.6 mm after the sample has passed through the apparatus (Figure 4) to the initial mass [7] [18]. The coefficient L is determined as follows:

L = 100 × P / M = 100 × (M − m) / M

with P: mass passing the 1.6 mm sieve (g), M: mass of the collected test material (g), m: mass of dry refusal on the 1.6 mm sieve (g).
Wear Resistance: Micro-Deval Test
The purpose of the micro-Deval test is to measure, under standard conditions, the wear of the aggregates produced by mutual friction in the presence of water and an abrasive charge in a rotating cylinder. This test is carried out according to standard NF EN 1097-1. After abrasion in the rotating cylinder (12,000 revolutions in 2 hours), the mass of the elements smaller than 1.6 mm produced in the presence of water is expressed by the micro-Deval coefficient (MDE) [7] [19]-[24]. The MDE coefficient is determined as follows:

MDE = 100 × P / M = 100 × (M − m) / M

with P: mass passing the 1.6 mm sieve (g), M: mass of the collected test material (g), m: mass of dry refusal on the 1.6 mm sieve (g).
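Both wear coefficients reduce to the same percentage-of-material-degraded calculation, sketched below with made-up masses; the variable names follow the definitions above and the numerical values are hypothetical.

```python
# Hypothetical sketch: Los Angeles (L) and micro-Deval (MDE) coefficients, both
# expressed as the percentage of material reduced below 1.6 mm, i.e.
# 100 * (M - m) / M, with M the initial dry mass and m the dry refusal retained
# on the 1.6 mm sieve after the test.
def wear_coefficient(initial_mass_g: float, dry_refusal_g: float) -> float:
    return 100.0 * (initial_mass_g - dry_refusal_g) / initial_mass_g

# Made-up test masses (grams)
la  = wear_coefficient(initial_mass_g=5000.0, dry_refusal_g=3850.0)  # Los Angeles
mde = wear_coefficient(initial_mass_g=500.0,  dry_refusal_g=410.0)   # micro-Deval

print(f"Los Angeles coefficient L   = {la:.1f} %")   # -> 23.0 %
print(f"micro-Deval coefficient MDE = {mde:.1f} %")  # -> 18.0 %
```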
Specific Density
The specific density is the mass per unit of absolute volume of the body, that is to say, of the material that constitutes the body, without taking into account the volume of the voids [2]. It varies from 2900 to 3150 kg/m3 depending on the type of cement [10]. It is measured in two ways: using the graduated cylinder or using the pycnometer. The graduated-cylinder method is very simple and fast and uses standard laboratory equipment (graduated test tube), but it is of low precision [7]. The pycnometer test is carried out according to the EN 1097-6 standard. The density of the aggregates (sand and gravel) used was measured using the graduated cylinder (Figure 5(a)) and that of the cement using the glass pycnometer (Figure 5(b)) [7] [20]-[27].

Figure 5. Graduated cylinder and pycnometer for the measurement of specific gravity.
Measurement of Fineness
The fineness of a cement is generally expressed by its specific surface area, that is, the total surface area of the grains contained in a unit mass of powder. The objective of the test is to assess the surface area of the cement grains per gram of powder [7]; this surface is expressed in cm²/g. The test for the determination of the fineness of the cement is carried out according to the EN 196-1 standard. The specific surface of the cements studied is measured by comparison with a reference cement whose surface area is known [8] [14]-[25]. The equipment used to carry out the test is called the "Blaine permeabilimeter". The test consists of passing a known volume of air through a bed of cement powder: the larger the surface area of the powder, the longer it takes for the air to pass through it.
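A simplified form of this comparison with a reference cement is sketched below: if the bed porosity and the cement density are taken as identical to those of the reference, the specific surface scales with the square root of the measured air-flow time. The reference values and flow times below are hypothetical, and the full standard expression includes density and porosity correction terms that are omitted here.

```python
# Hypothetical sketch: simplified Blaine comparison with a reference cement.
# Assuming the same bed porosity and the same density as the reference cement,
# the specific surface scales with the square root of the air-flow time:
#     S = S_ref * sqrt(t / t_ref)
# (The standard adds density/porosity corrections omitted in this sketch.)
from math import sqrt

S_REF_CM2_PER_G = 3500.0   # hypothetical reference cement surface (cm^2/g)
T_REF_S = 62.0             # hypothetical air-flow time for the reference (s)

def blaine_surface(flow_time_s: float) -> float:
    return S_REF_CM2_PER_G * sqrt(flow_time_s / T_REF_S)

for label, t in [("cement A", 75.0), ("cement B", 58.0)]:
    print(f"{label}: t = {t:.0f} s -> S = {blaine_surface(t):.0f} cm2/g")
```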
Consistency Measurement
The consistency test on the cement paste makes it possible to characterize its greater or lesser fluidity. The following standardized tests make it possible to assess this consistency:
• the consistency test carried out with the Vicat apparatus according to EN 196-3;
• the cone flow test according to standard NF P 15-358 [25].
As part of our work, we evaluated this consistency with the Vicat apparatus. The test consists in measuring the penetration of a cylindrical probe into the paste under the effect of a constant load; it is the distance "d" between the end of the probe and the bottom of the mold that characterizes the consistency.
Measurement of Setting Time
The test is carried out on a paste of standardized consistency according to the EN 196-3 standard. The objective of the test is to define, for a given cement, a time that is representative of its setting speed. The test follows the evolution of the consistency of a paste of standardized consistency. The apparatus used is the Vicat apparatus equipped with a needle of 1.13 mm diameter, as shown in Figure 7. The "initial setting time" is reached when, under the effect of a load of 300 g, the needle stops at a distance d = 4 mm ± 1 mm from the bottom of the mold. The "final setting time" is the time at the end of which the needle penetrates no more than 0.5 mm into the paste.
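The determination amounts to tracking successive penetration readings until the needle stops 4 mm ± 1 mm from the bottom of the mold; the sketch below interpolates a hypothetical series of readings to estimate that moment. The reading times and distances are made up for illustration only.

```python
# Hypothetical sketch: estimating the initial setting time from Vicat readings.
# Each reading is (elapsed minutes, distance d in mm between the needle tip and
# the bottom of the mold). Initial set is reached when d first rises to 4 mm.
readings = [(0, 0.0), (60, 0.5), (120, 1.5), (150, 3.0), (180, 5.5), (210, 12.0)]

TARGET_D_MM = 4.0  # EN 196-3 criterion: d = 4 mm +/- 1 mm

def initial_setting_time(readings, target=TARGET_D_MM):
    for (t0, d0), (t1, d1) in zip(readings, readings[1:]):
        if d0 < target <= d1:
            # linear interpolation between the two bracketing readings
            return t0 + (target - d0) * (t1 - t0) / (d1 - d0)
    return None  # target not reached within the recorded readings

t_set = initial_setting_time(readings)
print(f"Estimated initial setting time = {t_set:.0f} min")  # -> ~162 min
```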
Slump cone of Abrams or Slump Test
The Abrams cone slump test is carried out according to standard NF EN 12350-2 and is very often used because it is quite easy to implement [7] [8] [23]. It consists of molding the concrete in a truncated cone whose base diameter is 20 cm, top diameter 10 cm and height 30 cm. The cone is filled in three layers, each rodded 25 times with a metal rod 16 mm in diameter with rounded ends. The mold is then gently raised and the slump measured immediately afterwards (Figure 8(a)).
Making Concrete Specimens
The concrete is first kneaded in a concrete mixer or kneader, and then molded in molds 16 cm in diameter and 32 cm in height (Figure 9(a)). Finally, the molds are put on a vibrating table to compact the concrete; the compaction time depends on the consistency of the concrete. The standard NF EN 12390-1 regulates the dimensions of the molds. The test pieces must remain in the mold and be protected against shocks, vibrations and desiccation for at least 16 hours and up to 3 days, at a temperature of 25°C ± 5°C in hot countries and 20°C ± 5°C [22]. After demolding, the concretes are kept in a maturation basin (Figure 9(b)) until the expected day of crushing [22] [23] [24] [25].
Measurement of Fineness
The fineness of a cement is generally expressed by its specific (mass) surface area. Table 1 (Blaine fineness of the cements studied) presents the results obtained from the fineness test on the two cements studied.
Figure 11 shows the results obtained from the fineness test on the two studied cements. The higher the Blaine value of a cement, the finer the grinding. We can see that CIMENCAM has a greater grinding fineness. Note that the finer the milling of a cement, the higher the speed of the hydration reactions and the higher the mechanical resistance at young ages (2 to 7 days); on the other hand, such a cement is more sensitive to ventilation and to large shrinkage [7].
Consistency Tests
Table 2 shows the consistency in % of the various cements studied.
Figure 12 presents the results obtained from the consistency test on the two cements studied. From the results obtained, we find that CIMENCAM needs a large quantity of water to reach the consistency required by the EN 196-3 standard.
Setting Time
We talk about the setting of cement when the paste of cement, mortar or concrete loses its plasticity.In the course of our work, we have determined these different times on the various cements used (see Figure 13).
The setting times were determined on the standardized cement paste prepared during the consistency test. The setting time is shorter for a cement with greater fineness [7] [8] [18]-[27]. In view of the results obtained, we can see that CIMENCAM has a lower initial setting time than DANGOTE because it is ground more finely. In addition, the results we obtained are consistent with those of the Blaine fineness: the finer a cement is, the greater its specific surface, and the shorter its setting time. However, since DANGOTE has a longer setting time than CIMENCAM, it also has the advantage of less cracking and less shrinkage [1].
Abrams Cone Settlement of Fresh Concrete
The histogram presented in Figure 14 shows the value of the slump measured on concretes made from the various cements studied.
Density of Fresh Concrete
The fresh density is measured in order to verify the density calculated theoretically. The fresh densities of the concretes made with the different cements studied are shown in Figure 15.

The density calculated theoretically from our mix design was 2.99 t/m3. We can see that the fresh concrete densities obtained are different and below the theoretical value. The characteristics of the studied cements and the mechanism of hydration can explain this difference. It should also be noted that the phenomenon of contraction of the materials may have an influence on this difference. This measurement should make it possible to correct the proportions of the constituents of the concrete.
Semi-Wet Density
Density is defined as the mass of a body per unit volume. During the curing period, we measured the density of the test specimens at different ages, as shown in Figure 16.

During our study, we observed that the concretes studied have densities ranging from 2 t/m3 to 2.5 t/m3. The further the maturation progresses, the more the density increases. This is because, immersed in the maturation pond, the concretes become progressively saturated as they mature.
Compressive Strength
The compressive strength measured on the concrete specimens makes it possible to assess the bond of the aggregates (sand and gravel) with the cement paste and the strength class of the cement [7]. Note that between the 3rd and the 7th day the strength hardly changed, and then from the 14th day it began to grow again up to 28 days. Indeed, according to our calculations we hoped to obtain at least 35.3 MPa (the 32 MPa requirement increased by 15% to take into account the environment and the quality of the aggregates used to make the concrete).
Recommendations
This analysis aimed to compare the characteristics of the DANGOTE and CIMENCAM CEM II 42.5R cements. We recall that both have good characteristics (meeting the standards). Nevertheless, each of them shows a specificity for a given task:
• CIMENCAM has a fine particle size and good strength at both young and long-term ages, so this cement is recommended for work requiring high strength, such as beams and poles.
• DANGOTE, on the other hand, has a slightly coarser grain size and is less exposed to shrinkage and venting, so it is recommended for finishing work such as wall and floor coverings, prefabrication, filling elements (cinder block, Urdis) and the production of decorative elements.
Conclusion
Determining the physico-mechanical characteristics of the CEM II 42.5 R cements used in Cameroon was the main objective of our study. For this purpose, we carried out various tests on fresh and cured concrete, standard mortar and standard cementitious paste; examples include cement setting tests and bending and compression tests on hardened concrete. The study was carried out on two cements encountered in Cameroon, namely CIMENCAM and DANGOTE. We stopped at 28 days, which is the age at which the reference compressive strength of concretes or mortars is determined. The 28-day compressive strengths on the mortar specimens are as follows: CIMENCAM (54 MPa) and DANGOTE (45 MPa). Thus we have been able to determine that CEM II 42.5 R cements can be used in masonry and for reinforced and prestressed concrete, that is to say for standard structures requiring high strength. The results presented in this paper give, in a non-exhaustive way, a general idea of the different types of CEM II 42.5 R cements used in Cameroon.
Figure 2. Series of sieves used for sieve size analysis.
Figure 4. Apparatus for the Los Angeles test.
Figure 7. Vicat apparatus equipped with a 1.13 mm needle for the setting test.
Figure 8. Measurement of slump: the Abrams cone and its accessories.
Figure 11. Graph showing the Blaine fineness of the cements studied.
Figure 12. Graph showing the consistency of the cements studied.
Figure 13. Graph showing the setting time of the studied cements.
Figure 14. Graph showing the slump of fresh concrete of the studied cements.
Figure 15. Graph presenting the density values of fresh concrete of the studied cements.
Figure 16. Semi-wet density of hardened concrete as a function of curing age.
Figure 17. Resistance to compression at different days of maturation.
Table 2. Consistencies of the cements studied.
Growth, carcass traits, immunity and oxidative status of broilers exposed to continuous or intermittent lighting programs
Objective An experiment was conducted to investigate the effects of continuous and intermittent lighting programs in terms of productive performance, carcass traits, blood biochemical parameters, and innate immune and oxidative status in broiler chicks. Methods A total of 600 one-day-old Cobb-500 chicks were randomly allocated into six equal groups (100 chicks per treated group, with five replicates of 20 chicks each) based on lighting program: 22 h continuous lighting (22 C), 11 h lighting + 1 h darkness twice daily (11 L/1 D), 20 h continuous lighting (20 C), 5 h lighting + 1 h darkness four times daily (5 L/1 D), 18 h continuous lighting (18 C), and a final group subjected to 3 h lighting + 1 h darkness six times daily (3 L/1 D). The experimental period lasted 42 days. Results Compared with those under the intermittent lighting programs, broiler chicks exposed to continuous lighting for 22 h showed a significant improvement in live body weight and in the measured carcass traits (dressing and breast percentage). However, reducing lighting hours significantly reduced feed intake and feed conversion ratio values. The different lighting programs revealed no significant effect on any of the blood biochemical parameters. Oxidative stress and innate immunity parameters were significantly enhanced by reducing lighting hours (3L/1D). Conclusion The findings suggest that reducing lighting hours down to 3L/1D would be more useful in enhancing feed efficiency, innate immunity, and oxidative status compared with continuous lighting programs in broilers.
INTRODUCTION
Globally, the demand for producing high-quality broilers at lower cost has increased the emphasis on enhancing bird health and immunity to achieve optimum performance. Rapid growth in broiler chickens, especially at early stages, is associated with several health problems such as sudden death syndrome (SDS), ascites and skeletal abnormalities [1]. The duration of lighting plays a significant role in the health of broilers [2]. Several studies have investigated the negative effects of low photoperiod regimes and light intensity on broilers' carcass traits, meat quality, skeletal disorders, as well as innate immunity and health [3,4]. Other studies [5,6] confirmed that some health problems associated with fast growth were reduced with extended daily dark periods (6L/18D) in young broilers (from 3 to 14 days old). Likewise, Scott [1] reported that enhanced growth occurred with increasing day length (23 h) in broiler chickens from 14 up to 42 days. Therefore, using continuous lighting (CL) programs of 23 and 22 h results in maximal growth performance, as chicks will consume more feed, but it also results in an increased incidence of metabolic diseases and other problems linked to high performance [7,8].
Broiler producers extensively use intermittent lighting (IL) programs on broiler farms [9]. According to the findings of Buyse et al [8], IL programs have a promising effect on feed conversion, which may be due to reduced bird maintenance, fat deposition and activity. Besides, Apeldoorn et al [9] revealed that reduced feed intake with a maintained growth rate was correlated with enhanced feed conversion under IL programs; a higher ratio of metabolizable energy utilized for growth, as well as lower activity heat production, was correlated with the lower feed conversion ratio. Many researchers have studied the impact of applying IL for the same or fewer hours than CL on performance and reported improvement in the final body weight (BW) linked with reduced feed conversion ratio (FCR) values and enhanced health status. The reduction of activity during the hours of darkness may result in lower heat production, higher feed efficiency, or both [10]. Also, a lower incidence of ascites and leg problems is correlated with applying IL programs on broiler farms [11]. Worldwide, the Cobb 500 is a recent commercially available broiler strain. This strain has made an impression on the commercial poultry market and has unique growth rate and yield characteristics compared with other available strains [12]. However, few studies have been performed to evaluate the effect of different lighting programs in order to obtain the best growth performance while reducing the drawbacks of a high growth rate and feed consumption (FC) and maintaining the immune system at maximum growth in broilers [13].
Therefore, the purpose of the current study was not only to compare the continuous and IL programs but also to evaluate their effects on growth performance, carcass traits, blood parameters, oxidative stress and innate immunity in Cobb broilers.
MATERIALS AND METHODS
All experimental procedures were followed and approved by the Ethics Committee of the Local Experimental Animal Care Committee, Department of Animal Husbandry and Animal Wealth Development, Faculty of Veterinary Medicine, Damanhour University.
Experimental design, managements and feeding regime
The present study was carried out at the Research Poultry Farm, Faculty of Veterinary Medicine, Damanhour University, Egypt. A total of 600 one-day-old Cobb-500 broiler chicks of both sexes were purchased from El-Watania Hatcheries, km 59, Alexandria - Cairo Desert Road, Alexandria, Egypt. Chicks were individually weighed to make uniform replicate groups. The chicks were distributed in a completely randomized design into six equal experimental groups (n = 100) with five replicates (20×5). The six groups were as follows: T1, birds subjected to 22 h continuous lighting (CL22); T2, birds subjected to 11 h lighting and 1 h darkness (11 L/1 D) twice daily (IL22); T3, birds subjected to 20 h continuous lighting (CL20); T4, birds subjected to 5 h lighting and 1 h darkness (5 L/1 D) four times daily (IL20); T5, birds subjected to 18 h continuous lighting (CL18); and T6, birds subjected to 3 h lighting and 1 h darkness (3 L/1 D) six times daily (IL18).
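For clarity, the sketch below expands each lighting program into a 24-hour on/off sequence and checks the total daily hours of light; the hour-by-hour representation is only an illustrative convention and is not part of the experimental protocol itself.

```python
# Hypothetical sketch: expanding the experimental lighting programs into a
# 24-hour schedule (1 = light, 0 = dark) and checking total light hours per day.
def intermittent(light_h, dark_h, cycles):
    """e.g. 3 h light + 1 h dark repeated 6 times -> 24-hour pattern."""
    return ([1] * light_h + [0] * dark_h) * cycles

def continuous(light_h):
    return [1] * light_h + [0] * (24 - light_h)

programs = {
    "CL22": continuous(22),
    "IL22 (11L/1D x2)": intermittent(11, 1, 2),
    "CL20": continuous(20),
    "IL20 (5L/1D x4)": intermittent(5, 1, 4),
    "CL18": continuous(18),
    "IL18 (3L/1D x6)": intermittent(3, 1, 6),
}

for name, pattern in programs.items():
    assert len(pattern) == 24, name          # each program covers a full day
    print(f"{name:18s} light hours/day = {sum(pattern)}")
```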
The facility was equipped with supplementary pan feeders and drinkers for the brooding period and then birds were moved to the cages. The birds had free access to feed and water for ad libitum consumption. For the first three weeks, chicks were fed on starter ration (2,900 kcal ME/kg, 23% crude protein [CP] from 0 to 21 days) and followed by grower ration (3,200 kcal ME/kg, 21.5% CP from 21 to 42 days) for the remaining period of the experiment as presented in Table 1. Diets were manufactured by the El-Fagr company for the feed industry (Al Nubarya, El Bohira, Egypt).
Management and vaccination
Chicks were brooded in floor pens (covered with wood-shaving litter) during the first two weeks, using supplemental feeders and drinkers, and were then moved to cages; each replicate was housed in two separate pens (each pen 100 cm length × 90 cm width × 45 cm height). The cages were provided with separate nipple-drinker lines and separate feeders for each pen. Chicks were raised under identical hygienic, managerial, and environmental conditions throughout the study, conforming to the management guide recommendations for the strain. Briefly, the temperature was set at a constant 33°C at bird level during the first three days of age, then gradually reduced by 1°C every two days until reaching 25°C at 21 days, and remained at 25°C up to the end of the rearing period. The lighting schedules were applied, as described above, from hatch to the end of the rearing period (42 days). The light intensity measured in the middle of the room ranged between 5 and 10 lux. A routine vaccination schedule was administered, and necessary medication was given when needed based on diagnoses and symptoms shown by the birds.
Growth performance
During the experimental period, all birds were subjected to the same method of data collection. Chicks in each replicate were individually weighed at weekly intervals with an electronic balance until 6 weeks of age. Weighing was done every week in the early morning, before the birds received any feed or water. Weekly FC, total feed consumption (TFC), and dead birds (if any) were also recorded. Growth traits included BW, body weight gain (BWG), FCR, mortality (%) and livability (%). After calculation of livability % and FCR, the European Production Efficiency Factor (EPEF) was used as a performance index to evaluate the growing performance of broilers, as suggested by Aviagen [14]. The EPEF was calculated according to the following formula: EPEF = (livability [%] × live body weight [kg]) / (age [days] × FCR) × 100.
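As a worked illustration (not the authors' own code), the performance indices described above can be computed as in the short Python sketch below; all numbers are assumed for demonstration and are not data from this trial.

```python
# Illustrative sketch only: how FCR and the EPEF described above can be computed.
# All numbers are assumed for demonstration and are not data from this trial.

def fcr(total_feed_intake_g: float, weight_gain_g: float) -> float:
    """Feed conversion ratio: feed consumed per unit of body weight gain."""
    return total_feed_intake_g / weight_gain_g

def epef(livability_pct: float, live_weight_kg: float, age_days: int, fcr_value: float) -> float:
    """European Production Efficiency Factor as given by the formula above."""
    return (livability_pct * live_weight_kg) / (age_days * fcr_value) * 100

bwg = 2500 - 42                      # body weight gain over 42 days, g (assumed)
total_feed = 3900                    # total feed consumption, g (assumed)
ratio = fcr(total_feed, bwg)         # ~1.59
print(round(ratio, 2), round(epef(96.0, 2.5, 42, ratio), 1))
```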
Carcass traits
At the end of the trial (42 days), 20 birds (10 males and 10 females) from each group were randomly chosen around the average weight of the group to determine the carcass characteristics and internal organ weights. Birds were slaughtered with a knife (Halal method), allowed to bleed for 150 s, scalded at 60°C for 90 s, de-feathered and manually eviscerated. Following evisceration, all carcasses were chilled in cold water for 15 min. Hot carcass, breast, thigh, shoulder, left filet, liver, heart, gizzard, intestine, and abdominal fat were weighed. The blood, viscera, lungs, limbs, head, and neck were termed the offal and were discarded. The abdominal fat in the pelvic and abdominal cavity was collected separately from the carcass and weighed. The technological division of the carcass was performed and calculated according to Xie et al [15]. Breast, thigh, shoulder, left filet, liver, heart, gizzard, intestine, and abdominal fat were expressed as a percentage of the carcass weight. Dressing percentage, after weighing the warm carcass, was calculated according to Raza et al [16].
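For clarity, a minimal Python sketch of the yield calculations just described is given below; the weights used are hypothetical and only illustrate how dressing percentage and cut yields relate to live and carcass weight.

```python
# Hypothetical example of the yield calculations: dressing percentage relative to
# live weight and cut yields relative to hot carcass weight (weights in g, assumed).
live_weight = 2500
hot_carcass = 1800
cuts = {"breast": 620, "thigh": 420, "abdominal_fat": 35}

dressing_pct = hot_carcass / live_weight * 100
cut_pct = {name: w / hot_carcass * 100 for name, w in cuts.items()}
print(round(dressing_pct, 1), {k: round(v, 1) for k, v in cut_pct.items()})
```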
Blood biochemical parameters
Blood samples (5 samples from each replicate) were collected from the wing vein on the 42nd day of the experiment. Blood tubes were placed in a slanted position at room temperature for 30 min and serum was then separated through centrifugation at 3,000 rpm for 15 min. The separated serum samples were collected, frozen and stored at -20°C until subsequent analysis of blood biochemical parameters. Total protein and albumin were determined using Bio-diagnostic colorimetric kits according to the method of Apanius et al [17]. Globulin values were calculated by subtracting the albumin values from the total protein values. Serum total lipids were determined with the Bio-diagnostic total lipids kit according to the method of Zollner and Kirsch [18]. Serum triacylglycerol was determined with the Bio-diagnostic triacylglycerol kit according to the method of Fossati and Prencipe [19]. Cholesterol was determined using the enzymatic method of Allain et al [20]. Alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were determined colorimetrically with Bio-diagnostic ALT and AST kits according to the method of Reitman and Frankel [21]. Creatinine was determined by a kinetic colorimetric method as described by Bartles et al [22], while urea level was determined according to the method of Fawcett and Scott [23].
Assessment of antioxidative response
The malondialdehyde (MDA) concentration in homogenates was evaluated by the method of Jo and Ahn [24]. Serum glutathione peroxidase (GPx) activity was assessed using an ELISA kit (Catalog No. DPOD-100; QuantiChrom, BioAssay Systems, Hayward, CA, USA) according to Kokkinakis and Brooks [25]. Superoxide dismutase (SOD) activity was measured by the xanthine oxidase method (ELISA Kit: Catalog No. 706002; Cayman Chemical Company, Ann Arbor, MI, USA), which monitors the inhibition of nitro blue tetrazolium reduction by the sample, as described by Sun et al [26].
Innate immunity measurements
Blood samples were collected at 42 days of age (5 samples per replicate) as mentioned above. Phagocytic activity (PA) and phagocytic index (PI) were determined as described by Kawahara et al [27]. Briefly, 15 μg of Candida albicans culture was added to 1 mL of citrated blood from each group and incubated in a water bath at 25°C for 5 h, after which blood smears from each tube were stained with Giemsa stain. Phagocytosis was estimated by determining the proportion of macrophages containing intracellular yeast cells in a random count of 300 macrophages and was expressed as the percentage of phagocytic activity (PA). The number of phagocytized organisms per phagocytic cell was counted and termed the PI.
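A short, hypothetical sketch of how PA and PI follow from the smear counts described above is shown below; the counts are assumptions used only to make the two ratios explicit.

```python
# Hypothetical sketch of the PA and PI calculations from the smear counts described
# above; the counts are assumptions, not study data.
macrophages_counted = 300        # random count of macrophages on the smear
phagocytic_cells = 120           # macrophages containing intracellular yeast (assumed)
engulfed_yeast = 310             # organisms counted inside those cells (assumed)

pa = phagocytic_cells / macrophages_counted * 100   # phagocytic activity, %
pi = engulfed_yeast / phagocytic_cells              # organisms per phagocytic cell
print(pa, round(pi, 2))          # 40.0 2.58
```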
Statistical analysis
Data obtained were analyzed by one-way analysis of variance using the Statistical Analysis System (SAS, v9) [28], with the lighting program as the main effect. Means were compared by Duncan's multiple range test [29] when a significant difference was detected. The following PROC GLM (general linear model) model was used for the analysis of variance: X_ij = μ + A_j + e_ij, where X_ij = observed value, μ = overall mean, A_j = effect of the lighting program, and e_ij = random error.
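To illustrate the same one-way model outside SAS, the sketch below runs a one-way ANOVA on hypothetical replicate means of final body weight with SciPy; the group names mirror the treatments above, the numbers are assumed, and Duncan's multiple range test is not reproduced here.

```python
# Sketch of the one-way model X_ij = mu + A_j + e_ij using SciPy on hypothetical
# replicate means of final body weight; Duncan's multiple range test is not shown.
from scipy import stats

bw_by_group = {                      # g, assumed replicate means
    "CL22": [2510, 2495, 2530, 2480, 2515],
    "IL22": [2380, 2365, 2400, 2350, 2390],
    "CL20": [2340, 2325, 2360, 2310, 2345],
}
f_stat, p_value = stats.f_oneway(*bw_by_group.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```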
Growth performance
Results of the lighting programs' effects on growth performance parameters (LBW [live body weight], BWG, FC, and FCR) are presented in Table 2. Significant differences were found in all studied parameters among the groups. Broilers' final body weight was significantly (p<0.01) affected by the different lighting programs. Clearly, the broiler group subjected to 22 h continuous light (CL22) exhibited the highest final LBW, followed in order by the groups reared under the 11L/1D, 20 h continuous, 5L/1D, 18 h continuous and 3L/1D lighting programs (IL22, CL20, IL20, CL18, and IL18). Final LBW at the 6th week of age was reduced by about 5.44%, 6.79%, 8.48%, 9.14%, and 11.27% for T2, T3, T4, T5, and T6, respectively, in comparison with the T1 group (CL22). Equally important, the BW of chickens reared under the long 22 h photoperiod was higher than that of the groups reared at 20 h and 18 h, whether continuous or intermittent (CL or IL). BWG followed the same trend: the 22 h continuous light group (CL22) achieved the highest BWG at 6 weeks of age, followed in order by the groups reared under IL22, CL20, IL20, CL18, and IL18, respectively.
Furthermore, regarding feed utilization, the different lighting programs significantly (p<0.01) affected the TFC of all groups. Broilers reared under CL programs (CL22, CL20, and CL18) consumed significantly more feed than their counterparts exposed to IL programs (IL22, IL20, and IL18), respectively. At the same time, broiler groups exposed to the long 22 h photoperiod (T1 and T2), whether CL or IL, consumed significantly more feed than those under the medium 20 h photoperiod (T3 and T4) or the short 18 h photoperiod (T5 and T6). The lighting programs also significantly (p<0.05) affected the total FCR of all broiler groups. Chickens raised under the longest light intervals, T1 and T2 (22 h, whether CL or IL), recorded a total FCR of 1.58, compared with 1.65 and 1.54 for the medium photoperiod (T3 and T4) and 1.55 and 1.54 for the short 18 h photoperiod (T5 and T6), respectively. In addition, broilers reared under IL programs (IL22, IL20, and IL18) recorded better FCR values than those exposed to the corresponding CL programs (CL22, CL20, and CL18).
Mortality rate and European Production Efficiency
Regarding the mortality rate, the results presented in Table 2 show significant differences (p<0.01) among the experimental groups as affected by the different lighting programs. Clearly, the broiler group subjected to 22 h continuous light (CL22) recorded the highest mortality %, followed in order by the groups reared under IL22, CL20, IL20, CL18, and IL18. Equally important, broiler groups reared under a continu-
Carcass characteristics
Results of dressing %, carcass traits and internal organs as affected by the lighting programs are presented in Table 3. The lighting program significantly (p<0.05) affected the dressing %, breast muscle, liver, heart, intestine and abdominal fat % of the broiler groups. Clearly, the broiler group subjected to 22 h continuous light (CL22) showed higher dressing %, breast muscle, liver, intestine and abdominal fat % than all other groups. In contrast, no significant differences were found in the thigh, shoulder, left filet, gizzard and spleen % among the broiler groups under the different lighting intervals (22, 20, or 18 h), whether continuous or intermittent (CL or IL).
Blood biochemical parameters
Results of the lighting programs' effects on blood biochemical parameters and on blood kidney and liver function parameters are presented in Table 4. The different lighting programs (continuous or intermittent) had no significant effects on broiler chicks' blood biochemical parameters (albumin, globulin, albumin/globulin ratio, total protein, total lipids, triglycerides, and cholesterol). Nevertheless, some numerical differences were found among groups, with some blood parameters (albumin, total lipids, and cholesterol) tending to be higher in the 22 h continuous light group (CL22) than in the other groups. The results also clearly showed that the lighting program (photoperiod, whether CL or IL) had no significant effect on kidney function (urea, creatinine and uric acid) or liver function (ALT and AST) parameters of broilers.
Antioxidant and innate immunity response
Results presented in Table 5 show the oxidative stress parameters (MDA, GPx, and SOD) of broiler chicks at 42 days of age as affected by the different lighting programs. The different lighting programs (continuous or intermittent) significantly (p<0.01) affected the oxidative stress parameters of the broiler groups. Oxidative stress parameters (MDA, GPx, and SOD) tended to be higher in the broiler groups raised under continuous photoperiods of 22, 20, or 18 h (CL22, CL20, and CL18) than in the groups exposed to intermittent light programs (IL22, IL20, and IL18) for the same intervals. Meanwhile, decreasing the photoperiod from 22 h to 20 or 18 h (whether CL or IL) significantly reduced the oxidative stress parameters in the broiler groups (T2, T4, and T6). Additionally, the PI and PA data presented in Table 5 show that the lighting program (whether continuous or intermittent) significantly (p<0.01) affected the PI and PA of the different broiler groups. The CL program decreased PI and PA in the CL22, CL20, and CL18 groups in comparison with the groups (IL22, IL20, and IL18) exposed to the same photoperiods (22, 20, or 18 h) under IL regimes. It should also be noted that the intermittent short photoperiod (IL18) significantly increased PI and PA values compared to the other groups exposed to different lighting programs.
Growth performance
When the birds' environment is taken into consideration, the lighting program is one of the most critical environmental factors affecting performance. The lighting program affects the birds' metabolism, which in turn is responsible for maximizing growth performance and maintaining normal physiological processes and functions. The results of the present study indicated that broiler performance in terms of BW and BWG was significantly affected by differences in lighting intervals (continuous or intermittent). Reducing the lighting interval (from 22 h to 20 h or 18 h) and using intermittent light programs (IL) caused a significant reduction in BW and BWG, especially at older ages. This reduction in BW and BWG could be explained by the reduced feeding time, which decreases feed consumption by birds under the shortest light intervals; there was a highly positive correlation between BW and FC. The present results are in agreement with Schwean-Lardner et al [30], who compared different lighting schedules and demonstrated clearly that longer periods of darkness prevent regular access to feed and consequently reduce feed intake and limit growth. In the present study, BW and BWG were significantly higher in the CL groups than in the other lighting groups during the final stages. Also, Yang et al [31] reported that intermittent light caused a reduction in feed intake and resulted in superior broiler weights under CL. Similarly, Ingram et al [32] found that broilers reared under CL gained more weight than those exposed to intermittent or restricted light. Furthermore, Olanrewaju et al [33] reported that broiler chicks reared under the short/non-intermittent photoperiod had a significant reduction in BW and BWG from 14 to 42 days of age compared with birds reared under the long/continuous and regular/intermittent photoperiods, respectively. In fact, broiler chickens do not feed or drink during a dark period [34]. For this reason, total FC in our study was reduced by about 5.47%, 8.06%, 11.13%, 11.39%, and 13.11% for T2, T3, T4, T5, and T6, respectively, compared with the T1 group, which was exposed to CL for 22 h. Another possible reason for the reduced feed intake of birds reared under IL may be their lower activity when the lights are switched off, which is associated with the secretion of melatonin from the pineal gland, as reported by El-Badry et al [35]. The results of this study regarding feed consumption are in agreement with those of Classen et al [36], who demonstrated clearly that longer periods of darkness prevent regular access to feed and consequently reduce feed intake. Furthermore, IL has been used to restrict feed intake and improve feed efficiency, which could reduce production costs, as reported by Farghly and Makled [37].
Duration of lighting is a major factor affecting broiler performance. Several investigations showed that a CL regime is not recommended as an optimal program [37,38]. The improvement in FCR of broilers exposed to the shortest photoperiod and to IL programs could be attributed to reduced physical activity and energy expenditure during dark periods, better digestion of feed, reduced nutrient requirements for maintenance, and greater energy availability for growth, as found by Rahimi et al [34]. Birds exposed to CL, by contrast, are active most of the time, which is associated with more stress, disturbs their metabolism, and leads to a poorer FCR. Another reason for the improvement in FCR could be the lower feed consumption and lower feed wastage during the dark phases. Similar results were found by Mahmud et al [38], who reported that birds exposed to different IL programs consumed less feed than those under CL. This reduction in feed consumption of birds under intermittent light treatments might have contributed to their better FCR. The above results are in full agreement with El-Slamoney et al [39], who observed that the FCR of chicks grown under various intermittent light schemes was significantly better than that of chicks grown under CL. A significant improvement in FCR was also observed in broilers maintained under a short intermittent program by Yang et al [31], who reported that the FC and FCR of broilers were significantly affected by different photoperiods.
Mortality and European Production Efficiency
Regarding the mortality rate, the application of graded lighting programs in broilers indicated that extended lighting programs increased mortality, reduced livability and changed behavior, as found by Schwean-Lardner et al [30]. The high mortality and low livability in groups raised under a long continuous photoperiod may be due to the rapid growth rate of broilers, which is reflected in several problems, such as a high incidence of metabolic diseases (ascites and sudden death syndrome), tibial dyschondroplasia and other skeletal disorders. Moreover, cannibalism is another major problem when light is given continuously for long periods. An intermittent light program may help to mitigate this problem, as the birds remain quiet and calm during the dark hours of the light regimen. In addition, metabolic disorders in broiler production could be reduced by IL programs.
Similar findings to our results were reported by Farghly and Makled [37]. Moreover, Schwean-Lardner et al [40], in an examination of the impact of graded photoperiods (14, 17, 20, and 23 h), revealed that the total mortality rate due to metabolic and skeletal disease decreased linearly with increasing inclusion of darkness periods. They also suggested that 7 h per day is an appropriate length of darkness for maximizing broiler welfare based on the observed health parameters. Another potential benefit of darkness is the change in bird metabolism that occurs during the dark period and the consequent restoration of tissue [41]. Also, Abbas et al [42] observed that an intermittent light regimen reduced the mortality rate by three times compared to the CL regimen.
Carcass characteristics
In regard to carcass traits, the lighting programs had an effect on dressing, breast, abdominal fat, liver and intestine % in the present study, especially for the group exposed to 22 h of light (CL22). The increased dressing, breast, abdominal fat, liver and intestine % in the CL22 group may be due to the increase in pre-slaughter weight, which is highly correlated with dressing yield. Significant correlations between pre-slaughter, carcass and breast weights were observed in this study. Similar to the current study, pre-slaughter weight was highly correlated with dressing yield in Cobb 500 and Hubbard broiler strains [43]. Reducing the photoperiod (from 22 to 18 h/d) could be used successfully as a tool for decreasing abdominal fat content and improving carcass quality. This result is due to lower feed intake during dark periods and better efficiency of nutrient utilization.
These results were supported by the data of Rahimi et al [34], who observed a significant reduction in abdominal fat in broilers exposed to intermittent light compared with CL. Also, Oyedeji and Atteh [44] reported a significant reduction in abdominal fat of broilers subjected to a short photoperiod or a flash program. Moreover, the obtained results are similar to the findings of Farghly and Makled [37], who reported that lighting had minimal effects on carcass or part yields. However, they found no significant differences in the percentages of the drumstick, femur and gizzard among all groups under IL, although the differences were significant (p<0.05) in the dressed carcass, breast, liver, and abdominal fat percentages.
Blood biochemical indices
In general, blood biochemical parameters are used as indicators to detect disorders caused by incorrect lighting programs affecting the metabolic, nutritional and welfare conditions of broilers. In the present study, there were no changes in biochemical parameters, which suggests that exposure to the different light programs did not chronically stress the birds. It is well known that intermittent light decreases physiological stress and improves immunity and bone metabolism [36].
The results of the present study are in full agreement with those obtained by El-Slamoney et al [39], who found that plasma total protein, total lipids, glucose, cholesterol, and triglyceride levels did not differ significantly among different lighting groups. Similarly, Farghly et al [45] found no significant differences in any blood parameters between flash-lighting-treated chickens and controls. Furthermore, Farghly et al [46] found no change in plasma total protein, total lipids, cholesterol, or liver function enzymes as affected by lighting periods.
Antioxidant status and immunity activity
Parameters of oxidative stress such as SOD, GPx, and T-AOC are indicators for assessing oxidative stress status [15], and MDA is frequently used as a biomarker of oxidative stress [35]. In the present study, the lower MDA, GPx, and SOD levels in broiler groups raised under intermittent photoperiods (IL) suggest that an intermittent light regime may help to maintain the oxidant-antioxidant balance and may enhance ROS scavenging by regulating the concentrations of MDA, GPx and SOD. In the same way, decreasing the light interval from 22 h to 20 or 18 h significantly reduced MDA, GPx, and SOD activity. The beneficial effect of shorter photoperiods and an intermittent light regime on MDA, GPx, and SOD may also be attributable to melatonin secretion, which occurs mainly during long periods of darkness. Abbas et al [42] suggested that CL is more stressful than an intermittent regime, and stress generally destroys the oxidant-antioxidant balance. These results are partially in line with previous reports indicating that CL exposure caused a significant reduction in both serum melatonin concentration and antioxidant levels in chickens [9,39], whereas broiler chickens reared under an IL program had a better antioxidant status [46]. Additionally, melatonin decreased the homocysteine level in brain homogenates and may therefore have a role in protecting cells from oxidative damage [47]. Also, El-Badry et al [35] found that intermittent light decreased MDA compared to the CL group.
Phagocytic cells (non-lymphoid cells), including macrophages, neutrophils and heterophils, play a crucial role in immunity by digesting foreign materials such as bacteria, viruses, denatured proteins and lipids, and apoptotic cells [27]. In this study, the higher phagocytic values in broiler groups raised under an intermittent light regime suggest that IL increased the blood concentration of phagocytes. This may be due to the pineal gland, which appears to control the secretion of melatonin and, with it, the diurnal rhythm of non-specific immunity in chickens, thereby increasing serum PI and PA [35]. In the same context, Cardinali et al [48] reported that melatonin stimulates the production of progenitor cells for granulocytes and macrophages. It also stimulates the production of natural killer cells, CD4+ cells and T lymphocytes, which are enhanced by melatonin levels.
Another possible reason is that long photoperiods and CL programs decrease the opportunity for rest and sleep, thereby increasing fear reactions and physiological stress. Birds exposed to long CL regimes therefore show decreased PA and PI due to higher plasma concentrations of corticosterone, indicating a high level of physiological stress; together with foreign bodies, this causes inflammation, ultimately resulting in metabolic disorders and an increased mortality rate in broilers. Groups reared under short intermittent light programs, by contrast, have lower physiological stress and improved specific and non-specific immune responses, which emphasizes the importance of dark periods during the production cycle of broiler chickens [37]. The results of the present study are in line with those obtained by El-Badry et al [35], who found that intermittent light significantly increased the blood concentration of lysozyme, total white blood cell count and lymphocyte % compared to a CL group [42].
In conclusion, the results of our study revealed that, although a conventional CL program of 22 h improves the final BW, BWG, and dressing % of broilers, it has adverse effects on FCR values, innate immunity, oxidative status, and livability. These results suggest that reducing lighting hours down to a 3L/1D schedule seems more beneficial for feed efficiency, livability, antioxidant activity and immune responses, and can serve as an alternative to conventional CL programs. Finally, a further study is needed to evaluate the effects of different photoperiods and IL programs on economic considerations in a commercial production setting.
CONFLICT OF INTEREST
We certify that there is no conflict of interest with any financial organization regarding the material discussed in the manuscript.
Perceptions on Authenticity in Chat Bots
In 1950, Alan Turing proposed his concept of universal machines, emphasizing their abilities to learn, think, and behave in a human-like manner. Today, the existence of intelligent agents imitating human characteristics is more relevant than ever. They have expanded to numerous aspects of daily life. Yet, while they are often seen as work simplifiers, their interactions usually lack social competence. In particular, they miss what one may call authenticity. In the study presented in this paper, we explore how characteristics of social intelligence may enhance future agent implementations. Interviews and an open question survey with experts from different fields have led to a shared understanding of what it would take to make intelligent virtual agents, in particular messaging agents (i.e., chat bots), more authentic. Results suggest that showcasing a transparent purpose, learning from experience, anthropomorphizing, human-like conversational behavior, and coherence, are guiding characteristics for agent authenticity and should consequently allow for and support a better coexistence of artificial intelligence technology with its respective users.
Introduction
Social media, digital messaging and other comparable substitutes for human interaction increasingly change the way we behave in social as well as cultural manners (http://bits.blogs.nytimes.com/2015/11/04/in-2016-digital-transformation-goes-mainstream-idc-predicts). Prevailing technological trends often act as an activator for such shifts in human behavior. For example, following the move from (locally installed) desktop applications to software accessed through browsers and websites, the last years may be characterized by a particular increase of mobile, i.e., on the go, content consumption, triggered by the sustained propagation of (smart)phones and other portable media consumption devices [1]. This shift is based inter alia on the conclusion that users apply social rules to their interaction with systems, even though they know that such may be unsuitable [2].
Progressing from here, the next, rather significant paradigm shift is seen in the emergence of intelligent virtual agents, in particular messaging agents or (chat) bots. Essentially, these agents can be defined as computer programs which are capable of reading and writing messages autonomously, similar to how we humans perform this task [3]. Referring to the potential of intelligent agent technology, Beerud Sheth, CEO and co-founder of Teamchat, a San Francisco-based start-up specialized in smart-messaging APIs, highlights that "just as websites replaced client applications then, messaging bots will replace mobile apps now". So if "messaging is the new platform", then "bots are the new apps" (https://techcrunch.com/2015/09/29/forget-apps-now-the-bots-take-over/). Also venture capitalist Benedict Evans predicts this important change towards mobile messaging, stating that "old means that all software expands until it includes messaging; new means that all messaging expands until it includes software" (http://ben-evans.com/benedictevans/2015/3/24/the-state-of-messaging). Consequently, it appears that these types of intelligent messaging agents are at the edge of becoming the new means of communication for organizations and private individuals alike. Yet, although the technological progress of building agent technology has been significant in recent years, insights on how to make these bots socially accepted have barely scratched the surface. Reasons for this lack of knowledge might be found in the multidisciplinarity of the topic. That is, understanding the characteristics of what makes a messaging service intelligent seems not so much a technical but rather a social, ethical or even philosophical problem to solve, one that may need to begin with a more general understanding of (human) intelligence and a subsequent definition of the type of intelligence that is expected from an (artificial) agent.
From the Brain to the Mind
In order to start a discussion about what it is that makes interactions intelligent, we first need to reflect upon what makes us humans interact and communicate in a way that is deemed adequate to a given situation. Such a reflection might start with a rather simplistic definition of the functioning of the human brain and how it is connected to the human mind. Kurzweil argues that the brain can be understood as a hierarchical pattern recognizer containing approximately thirty billion neurons situated in the human neocortex [4]. Patterns can be learned, predicted, recognized and implemented into other patterns. Since processing, however, happens simultaneously and relentlessly within the brain, there is neither a beginning nor an end to this processing chain. According to Kurzweil, patterns are tripartite, composed of an input, a name for the activation of the pattern, and connections to both higher- and lower-level patterns. He further argues that the human pattern recognition module performs probability estimations of the input, its size, and the importance of its parameters, so as to determine the likelihood of a correct representation in the mind. Even though patterns are multidimensional, the hierarchical structure of the neocortex allows the assumption that recognition is based on one-dimensional pattern inputs, such as lists of data [5].
With respect to the human mind, it is particularly the capability of learning which seems essential to the human brain. As Minsky puts it: "The principal activities of brains are making changes in themselves" [6]. The human neocortex trains itself and learns new patterns continually in order to make sense of the input information. The learning process includes building connections among patterns as well as strengthening existing connections if patterns are triggered simultaneously [7], a mechanism which has already been adopted by modern machine learning approaches based on artificial neural networks [4].
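As a purely illustrative toy, not a claim about the neocortex or about Kurzweil's own model, the tripartite pattern and the strengthening of connections between co-activated patterns described above could be sketched as follows; all names and the learning-rate value are assumptions.

```python
# Toy sketch of a tripartite pattern and co-activation strengthening; illustrative only.
from dataclasses import dataclass, field

@dataclass
class Pattern:
    name: str                                     # name for the activation of the pattern
    inputs: list = field(default_factory=list)    # lower-level patterns feeding it
    higher: list = field(default_factory=list)    # higher-level patterns it feeds into

weights: dict = {}                                # connection strengths between patterns

def co_activate(a: Pattern, b: Pattern, lr: float = 0.1) -> None:
    """Strengthen the connection between two patterns that are triggered together."""
    key = (a.name, b.name)
    weights[key] = weights.get(key, 0.0) + lr

stroke = Pattern("horizontal_stroke")
letter_a = Pattern("letter_A", inputs=[stroke])
co_activate(stroke, letter_a)
print(weights)                                    # {('horizontal_stroke', 'letter_A'): 0.1}
```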
Learning, i.e., the storing of information, can be based on non-direct as well as direct thinking. The non-direct process stores patterns as "circuitous sequences of associations", whereas direct thinking uses lists and sub-lists [4]. Particularly during direct thinking processes, the mind is confronted with a cultural or ideological codex guiding thoughts. Kurzweil emphasizes this set of rules by highlighting that "many of these taboos are worthwhile, as they enforce social order and consolidate progress". Thinking may thus be summarized as a process that uses connected patterns and clusters of patterns, as well as narratives and stories, to educate and train the mind to achieve a given goal. This short, and admittedly rather simplistic, description of the principal processes performed by the human brain may serve as a relevant foundation for creating a better understanding of an even more difficult human characteristic, i.e., (human) intelligence.
Intelligence and Its Link to being Human-Like
Referring to Salovey and Mayer, one may find a number of definitions and interpretations of human intelligence and how they evolved throughout history [8]. One of the first definitions, by Descartes, supposed that intelligence is "the ability to judge true from false" [9]. Later, Wechsler's understanding of intelligence was based on "the aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal efficiently with his environment" [10]. Although this definition already included the distinction between mechanical, abstract and social intelligence, additional dimensions were still missing. In 1983 it was finally Howard Gardner who challenged the then-predominant definition of intelligence and how it may be tested, by arguing that intelligence does not root in one single trait (i.e., there is not always one single answer to a given question) but is better explained by a model of multiple intelligences [11]. This multitude of intelligences was subsequently further refined and enhanced by Albrecht, who re-framed it as the six multiple smarts [12], among them social intelligence (i.e., dealing with people, or as Albrecht puts it, "the ability to get along with others and get them to cooperate with you" [12]). With these six dimensions of intelligence in place, Albrecht argued that their combination and its resulting synergies form the portrait of a true "Renaissance Person", an appearance which needs to be resembled when one aims at building an agent whose goal is to be perceived as human-like.
Intelligent Agents
Alan Turing's famous imitation game poses the question: "Can machines think?" [13]. In 2015, Mikolov and colleagues stated that the time for intelligent machines is here. Sufficient computational power and immense amounts of data, complemented by complex machine-learning methods, allow for the creation of sophisticated general-purpose intelligent systems whose focus is on smart communication and learning [14]. Similarly, Tecuci stated that the purpose of today's artificial intelligence is to create intelligent agents which are capable of achieving goals by knowing their environment and the stakeholders they have to deal with, as well as memorizing the gained information and improving their behavior through learning [15]. Or as Lieberman put it already 20 years ago: "it follows that there must be some part of the interface that the agent must operate in an autonomous fashion. The user must be able to directly observe autonomous actions of the agent and the agent must be able to observe actions taken autonomously by the user in the interface" [16].
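A minimal, hypothetical sketch of the perceive-remember-act cycle implied by these definitions is given below; the class name, method names and the trivial policy are assumptions used purely for illustration, not an implementation from the cited work.

```python
# Minimal, hypothetical agent loop: perceive, memorize, then act towards a goal.
class SimpleAgent:
    def __init__(self, goal: str):
        self.goal = goal
        self.memory: list[str] = []          # memorize gained information

    def step(self, observation: str) -> str:
        self.memory.append(observation)      # remember what was perceived
        return self.decide(observation)      # act towards the goal

    def decide(self, observation: str) -> str:
        # placeholder policy: keep responding until the goal state is observed
        return "done" if observation == self.goal else f"respond_to:{observation}"

agent = SimpleAgent(goal="conversation_closed")
print(agent.step("user_question"))           # respond_to:user_question
```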
Taking these basic rules as a guideline, we currently see intelligent interface agents starting to inhabit social media ecosystems. They are already capable of autonomously producing content as well as taking part in basic interactions on various social media platforms [17]. They aggregate content from different sources and automatically respond to inquiries for brands and companies in customer care settings. The literature on socially acting agents has also grown dramatically, primarily driven by advances in technology. One of the most notable systems with regard to the representation of social intelligence is Rea, a real estate agent in a virtual world that people can query about buying properties. The system uses intonation, gaze direction, gesture and facial expression to support the conversation [18].
Yet, in doing so, these agent technologies may cause a number of new challenges. For example, social botnets have the ability to expose private data by exploiting different vulnerabilities found with social media users. They may post, comment, moderate and share content [19], thereby potentially affecting humans' perceptions of reality [20,21]. This type of malicious behavior also adds to the increasingly negative connotation of artificial intelligence (AI) technology and thus highlights that, if we want these intelligent agents to be trusted, we need to guarantee the trustworthiness of their autonomous doings. In other words, we do not need intelligent, but rather socially intelligent agents.
Socially Intelligent Agents
As described above, humans already interact with agent technology both at work and in their private lives. During these interactions, the software agent usually tries to mimic human traits, although these efforts often remain unnoticed by the user [17]. To this end, the Turing test still counts as the standard measure for human-like communication behavior of intelligent systems, as it not only requires intelligence to appear human-like, but also asks for additional social capabilities to be present [13]. As Persson and colleagues put it: "The ultimate purpose with socially intelligent agents is not to simulate social intelligence per se, but to let an agent give an impression of social intelligence" [22].
Adding to this discussion, Albrecht [12] defined a number of key aspects of social intelligence. These include clarity, situational awareness, empathy, presence, and authenticity. For him, situational awareness is the ability to understand a situation and its circumstances, creating some sort of social radar which is used to detect happenings in any given situation. Studies exploring situational and context awareness of intelligent agents further found that "context awareness allows applications to adapt themselves to their computing environment in order to better suit the needs of the user" [23]. Persson et al. also highlight that socially intelligent technology needs to understand its own presence and the context around itself, so that it can take humans as well as other agents into consideration. To achieve this, they took advantage of so-called primitive psychology and life-preserving aspects such as needs, desires, sensations and pain [22].
Contributing to the development of socially intelligent agents, researchers have also aimed at a better understanding of user-agent relationships and their influence on the interaction. Coon, for example, explored different relationship distances, ranging from stranger to companion, optimizing the frequency and level of engagement [24]. Similarly, Bickmore and Schulman focused on different user-agent relationship distances to support health counseling at various intimacy levels [25]. Finally, trust and trust building seem to play a crucial role when developing agents which should exhibit some sort of socially intelligent behavior. Relevant previous research here includes Gratch's work on virtual rapport, which shows that non-conscious behavioral feedback, such as mimicry and backchanneling, increases trust and consequently user-agent engagement in conversations [26], as well as Matsuyama and colleagues' work showing that relationships may also be tightened through a better analysis of a user's verbal and non-verbal cues, creating some sort of agent awareness [27].
While the above shows that previous research has investigated some of the aspects contributing to social intelligence (i.e., situational awareness, empathy, presence), the concept of agent authenticity seems to have been left out so far.
Authenticity of Agents
Agent authenticity was already a topic in the 1960s, when Joseph Weizenbaum first introduced his ELIZA [28]. While, in principle, ELIZA was not much more than an interactive diary, it was one of the first computer-enabled technologies able to build up some sort of human-technology relationship by showing interest. Later, scientists coined the term Darwinian buttons to describe relevant social actions such as tracking an individual's movement, making eye contact, or gesturing kindly in acknowledgement of another person's presence [29,30]. Turkle emphasizes that, since the advent of computers, people have been searching for criteria of authentic relationships, although today computer companionship already seems normal. Even more so, she argues that "as robots become a part of everyday life, it is important that these differences are clearly articulated and discussed". Here authenticity plays an important role. Yet, although research agrees on its importance [31], empirically tested measurement methods are so far missing [32,33].
Authenticity defines a person's honesty and sincerity [12]. It is about establishing cooperation, preventing manipulation and being true to oneself and others. Moreover, having respect, staying true to one's values and playing fair makes a person (and potentially also a system) authentic. Albrecht describes it with the German expression of being a "Mensch" (i.e., being a human) [12]. Derived from the Latin word authenticus and the Greek word authentikos, being authentic further means to be trustworthy, authoritative, and acceptable [34], characteristics which are not only important in AI but also in general consumer behavior [35]. For example, Pine and Gilmore deem authenticity the new consumer sensibility [36], making it an important topic in marketing [37], tourism, leadership as well as in education, where it has already led to the development of several relevant research models and frameworks [38]. In software development, however, and here particularly in the development of autonomous agents, authenticity considerations as such are often missing. Johnson and Noorman emphasize this lack by saying that "responsibility issues should be addressed when artificial agent technologies are in the early stages of development" [39].
Although the overall concept of agent authenticity seems to be neglected by engineers, there has been work on single traits influencing authentic agent behavior. That is, researchers have evaluated the effect of transparency on agents' argumentation capabilities [40] and decision making [41,42]. Also, conversational agents' ability to anthropomorphize has been subject to several studies (e.g., [43,44]), as has their conversational behavior (e.g., [27,45]). Finally, engineers have been working on making agents learn from experience, allowing them to build up some sort of contextual and (to some extent) social awareness, and consequently act autonomously and coherently (e.g., [46][47][48][49][50][51]). While all these research efforts focused on single characteristics of intelligent, potentially authentic agent behavior, they did not aim at understanding agent authenticity as a holistic concept, or at whether such a concept may be compared to or inspired by human authenticity.
In the past, people drew hard lines when talking about machines being cognitive [30]. Today, however, computer culture accepts affective computing and sociable machines as well as flesh-machine hybrids, as long as they do not come too close, i.e., as long as they do not make us feel uncomfortable [52][53][54]. Being authentic is just one more piece of the puzzle. In order to become authentic, however, it is necessary to anthropomorphize both in the role of a single entity and in that of an enterprise representative, i.e., when acting in the name of a legal institution [36]. Only when this type of anthropomorphic behavior is achieved will the perception of brand and company identities be supported. In other words, conversations within everyday life are the most powerful form of (consumer) seduction, for which inauthentic conversational behavior creates a perception of phoniness.
Gundlach and Neville [55], Grayson and Martinec [56] as well as Beverland et al. [57] developed frameworks for the positive perception of authenticity. Yet, authenticity is deemed a social concept, and thus for agents to advance they would need to be equipped with internal representations of relevant social information [58]. This may be achieved through social networks enhanced by the individual beliefs, goals and intentions of an intelligent system. Agent-based social modeling has to specify this model, make basic assumptions, create interrelations and rules, and build models and research designs for relevant tests, simulations and experiments. With this type of social computing we should then be able to develop agents that are eventually capable of acting socially as agents, rather than trying to imitate humans. One question that remains, however, concerns the type of social characteristics which would lead to the initial breakthrough. In other words, what are the critical traits of authenticity we should start with?
Materials and Methods
In order to better understand the authenticity demands for intelligent agents, we talked to a number of experts in artificial intelligence and socio-ethics. The goal was to verify and further explore the multi-characteristic construct of authenticity and its implications for the domain of autonomous, conversational agents. Interviews were conducted according to McCracken's qualitative long interview method [59]. They were open-ended, following past studies, and focused on the understanding and conceptualization of domain-specific authenticity dimensions [60,61].
Potential interview partners were selected among the authors of relevant recent literature, focusing on the philosophical and socio-ethical field of authenticity as well as on artificial intelligence. Inclusion criteria were based on their contributions to the field, with their latest contribution dating back no longer than 3 years, as well as their respective impact expressed by citations according to Google Scholar. Starting with an initial sample of 35 authors, we used a snowball sampling method to extend our reach. Eventually, a total of 68 potential interviewees were approached via email, of whom 12 agreed to participate in the study (9 male and 3 female). All experts had either already obtained a Ph.D. in their respective fields or were in the final process of doing so. Also, all of them indicated previous experience with, knowledge about, and interactions with messaging services as well as intelligent systems. Table 1 lists our interview participants, their respective areas of expertise, the year of their last contribution to the field, as well as the number of citations which Google Scholar connected to them at the time of the interview. All interviewees gave their informed consent for recording and inclusion of their input before they participated in the study. The study was conducted in accordance with MCI's guidelines outlining ethical considerations regarding research with human participation. The fulfillment of the respective protocol was approved by the MCI Research Ethics group.
Results
Interviews lasted between 15 and 45 min, were audio recorded, fully transcribed (note: for reasons of readability, transcriptions were denaturalized) and subsequently analyzed applying McCracken's long interview method. Results point to five relevant agent authenticity traits, which are further discussed below.
Be Transparent
"Companies are building up their chat bots for who-knows-what-reason.Obviously, they have their own purpose and so there are probably differences in authenticity based on the application of the bot." (P11) In 60 statements, data show that authentic messaging agents should showcase a transparent purpose.The design purpose reaches from providing specific functions to assisting users, having a belief model, being predictable and allowing users to create confidence in the decisions of the agent.Various statements showed that some of these factors are already incorporated in today's systems.The characteristic of having a transparent purpose also includes the capability of acting transparently and having an intention, highlighted by P07 as "being able to provide information on how the bot reached a certain conclusion".This means that in order to offer transparency, an agent has to provide legitimacy for its actions, supporting a certain type of predictability with respect to decision making and its rule-based value and belief system.All of which prevents so-called black-box behavior.Agents have to show "how they behave in specific situations, how they make decisions.[...]; what we people do is not always predictable, but machines or agents have some rule base, which means they should not behave on their own [...]; they should be programmed so as to correspond to some sort of ethical standard" (P10).This type of predictable behavior also relates to important other aspects such as security and objectivity, where a user should be able to rely on an expected agent behavior.Also, our experts see the need for an agent to have some type of internal sortation, i.e., a defined attitude.This internal order should reflect its creator/owner.Such provides relevant information about the agent's purpose.Ultimately, the agent is perceived as a "machine that is representing a specific organization" (P11).Thus, "if an agent doesn't tell you who its creators or owners are, and what its purpose is, it creates an inconclusive context of interaction; consequently, people may easily feel uncomfortable" (P07).
Table 2 lists the number of statements related to being transparent for each interviewee. Interview participants were split into those coming from the machine-learning/AI domain and those coming from the socio-ethics domain. While, given the numbers, it does seem that transparency is a rather socio-ethical topic, a comparison between groups did not show a significant difference (t = −2.25, p = 0.06).
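For readers who wish to reproduce this kind of between-group comparison, a hedged sketch with assumed statement counts is shown below; the exact test variant used in the study is not specified here, so Welch's t-test is shown as one common choice rather than the authors' procedure.

```python
# Hedged sketch of a between-group comparison of statement counts; counts are
# assumed, and Welch's t-test is used as one common choice of test.
from scipy import stats

ml_ai_counts = [3, 4, 2, 5, 3, 4]            # statements per ML/AI participant (assumed)
socio_counts = [7, 6, 8, 5, 6, 7]            # statements per socio-ethics participant (assumed)

t_stat, p_value = stats.ttest_ind(ml_ai_counts, socio_counts, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```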
Learn from Experience
"Basically, it has to learn something, but since it has to learn, it may also make mistakes.Mistakes like children do-or not only children.Although, for some reason you don't want the artificial intelligence to make mistakes because it is supposed to be sort of perfect."(P08) A total of 59 unique ideas give a comprehensive insight into how the learning from experience characteristic creates authenticity in messaging agents.As showcased by the quote above, the learning process follows a path very similar to human learning.This includes cooperation with other agents as well as humans in order to learn from experience, including trial and error.Thus, assisted by human intervention, authentic messaging agents should learn from experience, patterns, and iterations.While our experts clearly highlight that "deep learning relies on large amounts of data" (P05) they also think that authentic messaging agents should be capable of developing their goals beyond the often-obscure application of rules.This is, the continuous prevention of errors provided by the adherence to strict rules and black-lists (i.e., rules specifying what not to do or not to say), although supporting predictability, is not always beneficial when it comes to fostering authenticity.An authentic agent should rather openly behave within a certain margin of error, which indicates social intelligence.P01 supports this view by stating that "if there is no margin of error, the AI is either a genius or it does not want to show any traces of hesitation, uncertainty or potential fault", which might be perceived deceptive.Yet, while an agent should be transparent about potential misinterpretations and errors, learning from experience also means that it should reflect upon its erroneous actions and as a consequence prevent them in the future.
A second important aspect of authentic learning from experience relates to context-aware learning, where an agent continuously adapts behavioral patterns to what it is able to perceive from its environment. For example, "if it knows from previous behavior and behavioral patterns how you would react and how you would usually use your phone, it would know that after 8 pm calls would normally be only between you and your wife" (P10). Context-aware learning means that it would not only react to this understanding but also make this newly acquired knowledge transparent, avoiding pure black-box behavior. Essentially, it should be able to "explain the workings and rational of its system so that we can judge whether its belief model is well founded" (P04).
Table 3 lists the number of statements related to learning from experience for each interviewee. Again, interview participants were split into those coming from the machine-learning/AI domain and those coming from the socio-ethics domain. A comparison between groups did not show a significant difference regarding the number of statements (t = 0.36, p = 0.73).

More than fifty interview statements explain that a messaging agent has to anthropomorphize in order to be authentic. That is, the agent "has to understand human physiology, it has to understand history, and it has to understand culture because it is part of the user as well as the environment to which it is deployed to" (P03). When designing such a character, the "main concerns are the consistency of all aspects of life history and emotional reactions to make the chat bot as real as possible" (P06). Based on this understanding, authentic agents should "create a human persona, which you can relate to and model for yourself" (P01). One may even speak of a need for charisma, where an authentic agent would be expected to "correct a user's interpretations and provide according responses" (P02). To fully anthropomorphize, authentic agents would also need to take care of individual differences related to users and their environment. This goes along with a certain culture-awareness and knowledge of the differences within a culture. Even more so, they should adapt to the local environment, adopting behavior which reflects the given culture.
Not only in human-human interaction but also in all human-computer interaction, the level of trust is important [18]. To this end, a second aspect connected to anthropomorphic behavior concerns the existence of trust. That is, in order to "show proper social intelligence an agent needs to build up trust" (P01). One interviewee would even argue that in order to build up a trustful relationship with an agent one requires exposure; i.e., "there has to be something at stake; in a social relationship, something is at stake, and that's what makes it exciting [...]; to most of today's bots you can say you are an idiot and they would not react to that or change their behavior; you could not do that with another person, as you would destroy your relationship" (P04).
Table 4 lists the number of relevant statements per participant. A comparison between groups (i.e., machine learning/AI vs. socio-ethics) did not yield any significant differences (t = 1.06, p = 0.32).

Adequate conversational behavior is highlighted by 45 statements as an important characteristic of an authentic messaging agent. Elements of this behavior range from having a mission and getting to the point, to not wasting time, filling in blanks, understanding quickly, bringing value to the interaction that goes beyond task-orientation, keeping the conversation moving, aligning expectations, and being surprising. However, conversational behavior also includes conversational awareness and conversational skills. These factors are described by turn-taking in conversations, meta-verbal cues, the response to conversational signals, joint attention or the handling of overlap in dialogs. To that end, one of our interviewees explained that "a lot of these things should happen in the system, a lot of non-verbal things, like when it should speak and how it should speak and where it should look at and so on" (P04). Previous research has also found that visual cues [62] or emoticons [63] may be used to enhance such conversational behavior. This type of intelligent interaction behavior leads to the perception of responsiveness and availability, and enriches a conversation by respecting prevailing values and manners. P08, for example, states that we need to ask ourselves "what kind of values are we building and what values do we have ourselves; it's like with children-you say whatever you want but they see you, how you are, and so they act accordingly; which means that agents should probably learn this from interacting with themselves as well as with other humans" (P08).
The number of relevant statements per participant is shown in Table 5. Differences between interviewees from the machine-learning/AI field and those from the socio-ethics field were not found (t = 0.37, p = 0.72).
Be Coherent
"I think when people think of a chat bot, when they interact with a bot, it's all kind of lumped together into one; [...] this human-like, and authenticity, and genuineness, and trustworthiness, and all of that; I think this is all kind of grouped together in people's minds."(P11) Our interview data shows 22 instances referring to the characteristic of coherence.Coherence with respect to authentic agents describes the context and social awareness capability to memorize, to relate to common experiences and to establish common ground.Statements highlight that reacting to and through context, as well as making sense of social situations respecting the reactions of participants (both real and artificial) is a crucial attribute of authenticity.Also, an agent has to relate to prior experiences with human and artificial interlocutors.In doing so it creates a timeline of interactions and a respective memory of interactions, which allows for the acknowledgement of different standpoints and thus supports coherent and reproducible streams of argumentation."You have to add a timeline during the conversation, so your answer can not only be based on what was previously mentioned, like right before; it has to be like a follow-up, not like single interaction steps" (P02).To help with this, "the agent should be aware of some user dependent traits [...]; if it is aware of like, for example when the agent would be built into Facebook [...]; to use this social context information to build common ground".Using this information "it would know a lot of a user's personal history and thus could give much more proper feedback" (P02).
The numbers of relevant statements are shown in Table 6. Also with respect to being coherent, the numbers did not differ between participants from the machine learning/AI field and those from the socio-ethics field (t = 0, p = 1).
Verification of Interview Insights
In order to verify the insights gained through the interview study described above, we set up a survey targeted at experts in relevant fields. Questions were open ended and positioned along the five authenticity traits identified by the interviews and relevant literature (cf. Table 7). In addition, we asked respondents to rate how competent they feel in answering these questions (from 1 = low: I am a novice to 5 = high: I am an expert). A total of 40 academics from leading institutions in the US, Europe and Asia were approached via email and asked to complete the survey as well as to rate the relevance of Linguistics, Artificial Intelligence, Software Engineering, Psychology, Social Science, Human-Computer Interaction, Philosophy, and Ethics for their field of work (from 1 = not relevant at all to 5 = very relevant). Selection of these experts was again based on their research interest and recent contributions to the topic. People who had already participated in the preceding interview study were not contacted again. In addition, the survey was sent to a local interest group which meets regularly to discuss recent developments in Artificial Intelligence. Although feedback was only received from five people (cf. Table 8 for participants' background information and overall data on work relevance with respect to the fields highlighted above), all of whom gave their informed consent for including their data in scientific publications, the identified authenticity traits appear to be validated.
The importance of transparency, for example, was highly endorsed. Although its degree of predictability may vary by task and context, it seems that it is particularly the understanding of data provenance and algorithms, as well as deductive reasoning, that helps explain agent behavior.
As for the question of whether a bot should mimic human behavior or rather develop its very own personality, answers show that this depends on the context. Systems whose primary goal is to represent an existing entity should exhibit corresponding behavior, although this may be perceived as deceptive and thus potentially unethical (S02). On the other hand, systems which are strong enough to be perceived as independent entities may very well develop their own personality traits, independent and distinct from their "mother institutions". An example of this could be seen in Alexa, who, although created by Amazon, clearly aims for an independent appearance (participants' overall competence rating with respect to answering questions regarding transparency: M = 3.20; SD = 1.79; Mode = 5; Median = 3).
With respect to learning, survey answers particularly highlight the need for an authentic agent to benefit from experience and, further, to change its communication behavior based on a given context. To that end, context awareness not only refers to the physical environment but also to the speaker.

As for anthropomorphizing (participants' overall competence rating with respect to answering questions on this trait: M = 3.00; SD = 2.00; Mode = 1; Median = 3), on the one hand some feedback shows that human-like AIs are not necessary (S04 and S05) and potentially unethical (S02) if one is simply interested in completing a task or solving a problem. On the other hand, S05 highlights Watzlawick's principles of communication, which greatly emphasize the relationship between message sender and receiver as an influencing factor in the communication process [64], although the type of relationship that is required may depend on the given context. That is, building customer loyalty might require a stronger social connection (and thus a more human-like agent) than providing information on the weather (S04). This is also in line with recent work on rapport modeling and social reasoning used in agents to provide more personalized information to human interlocutors [27], to strengthen the relationship between the system and the user and thus foster the exchange of relevant information [45], or to maintain users' engagement [65].
In general it appears that the given task and context define the expected conversational behavior, where the available ability level should range from answering simple questions to actively leading a dialog (S04). As for the application of conversational rules and values, feedback further shows that a truly authentic agent is able to distinguish between evidence-based statements (i.e., statements to the best of its knowledge) and guesses, and that it would make this distinction transparent to its interlocutors (S01) (participants' overall competence rating with respect to answering questions regarding conversational behavior: M = 2.80; SD = 1.48; Mode = 3; Median = 3).
Finally, as for the coherence factor, the provision of similar answers to similar questions and the use of coherent language structures, as well as consistent levels of abstraction, seem to be key expectations (e.g., mentioned by S04). Authentic agents should be equipped with values so that arguments and affiliated reasoning strategies are based on facts, logic and transparent opinions (S03). To this end, even the ability to de-escalate was named as a requirement for authentic behavior (S05). Whether agents should be able to establish common ground (another core principle of human dialog), however, triggers ethical concerns, as this would imply a cultural, social and conversational equivalency which simply does not (yet) exist (S02) (participants' overall competence rating with respect to answering questions regarding coherent behavior: M = 2.80; SD = 1.48; Mode = 3; Median = 3).
Discussion
The interview and survey results presented above show that authenticity in messaging agents is a multi-characteristic concept. As such it encompasses most, if not all, aspects of what Albrecht describes as social intelligence [12].
First, authenticity relates to context awareness, including context-aware learning, social context awareness as well as conversational awareness. This means that an authentic agent should react to and through context. Interactions should be context-dependent, which means that learning from context, so as to maintain a conversation's meaningfulness, is crucial. This includes gathering information from different sources such as social media, daily interactions, behavioral patterns as well as sensors to which the agent may have access. According to Wang et al. [58], the gathering of social information such as social relations, role structures or influencers of a social environment is also an important agent task. While Albrecht lists situation awareness as a particularly important dimension of social intelligence [12], other authors focus more on the coordination of events and decisions within an environment when talking about agent intelligence [66,67].
With respect to context awareness, it is mainly the ability to keep a timeline of interactions and the capability of memorizing events, actions and decisions which help increase an agent's authenticity. As Beverland found, links to the past are relevant to establishing authenticity by creating a personal agent history, which also supports the grounding process [68]. Establishing common ground is necessary as it preserves the relationship between agent and user, and adds to an upright and ethical conversation. Authenticity builds on these relationships rather than on task-orientation, which makes trust another critical constituent of the agent-user relationship. Both the interview data as well as the literature show that task-orientation does not add to authenticity [69]. Authentic agents establish trust rather through personal coherence or individual conversations with the user [70].
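To make the idea of a timeline and memory of interactions more concrete, the sketch below stores time-stamped conversational events and retrieves earlier exchanges on the same topic so that later replies can be framed as follow-ups. The class and method names are our own illustrative assumptions and do not refer to any existing framework.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class InteractionEvent:
    timestamp: datetime
    speaker: str      # "user" or "agent"
    utterance: str
    topic: str        # coarse label used to establish common ground

@dataclass
class InteractionMemory:
    events: List[InteractionEvent] = field(default_factory=list)

    def remember(self, speaker: str, utterance: str, topic: str) -> None:
        self.events.append(InteractionEvent(datetime.now(), speaker, utterance, topic))

    def recall_topic(self, topic: str) -> List[InteractionEvent]:
        # Return earlier exchanges on the same topic so the agent can produce
        # follow-ups rather than isolated, single-step replies.
        return [e for e in self.events if e.topic == topic]
```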
In addition to awareness and trust, experiences through and with the agent facilitate an overall authentic appearance. Learning from experiences and their context may even create a certain state of connectedness between a user and an agent. As described by Albrecht [12], Pine and Gilmore [36] as well as Chhabra [71,72], the continual challenge or cyclical process of being authentic is achieved primarily through memorizing, learning from experiences and establishing connections.
Furthermore, our results show that when an agent learns to create individualized patterns, it should also explore the individual differences found in users. That is, depending on a user's gender, age, culture or personal traits, an authentic agent needs to become a type of persona the user wants to interact with; an entity which remains "real", true to itself and honest. Some of our interview participants described these as charisma-building factors which have a great impact on the agent's authenticity. Others called it anthropomorphizing, similar to Pine and Gilmore [36], who argue that anthropomorphizing is relevant for the authentic perception of consumers when thinking of a brand. In addition, Albrecht lists "staying true to oneself" as one of the four dimensions of authenticity [12].
Since an agent should know and understand itself, it requires an internal sortation regarding values, attitude, integrity, and its origin. This need for understanding its presence and the integration of a determined value and belief system is also found in previous research conducted by Persson et al. [22], Peterson and Seligman [31], Albrecht [12], Turkle [30] and Gundlach and Neville [55]. This helps when verifying an agent's origin and integrity. Beverland [68] and Beverland et al. [73] argue that agents may even connect through values and identity so as to become an inherent part of a given cultural setting, which comes back to culture-awareness being an essential part of agent authenticity. In other words, an agent should incorporate cultural behavior, characteristics as well as regulations. Through the integration of cultural models and by adopting common sense principles, the agent builds its identity within a community. In doing so, it may be judged by the same strategies of social intelligence as humans are [22]. Also, moral assumptions about culture and given circumstances, and a personal code of conduct, are considered factors influencing authenticity [12,74].
Finally, our interview and survey data highlight that creating transparent and predictable relationships increases the confidence level between interlocutors. Thus, an agent's purpose has to be transparent so as to allow for the verification of behavior, i.e., decision making and action taking. Interviewees highlighted that increased predictability and knowing the intention of an agent increase the likelihood of an interaction. As Molleda and Jain [75] show, the commitment to an objective is a core requirement for authentic behavior in organizations. The same also applies to authentic agents (in particular if they act as representatives of organizations). Albrecht [12] defines this as a raison d'être, which may be achieved by having a defined mission statement, objectives, priorities and an action-reaction value map.
Thus, according to our results it seems that combining the characteristics of anthropomorphizing and having a transparent purpose defines the core requirements to be met by an agent which aims to be authentic. To that end, Groves [76] as well as Schallehn et al. [77] state that the behavior of an authentic agent is guided by its identity. However, to create an authentic messaging agent, behavior also needs to incorporate conversational aspects. These include conversational skills such as listening intently, handling the dialog, turn-taking, backslapping, and keeping attention. Furthermore, the agent should be able to keep the conversation meaningful and interesting. This may be achieved by incorporating context into the conversation and adjusting it to the agent's personal traits. Molleda calls these the strategic communication efforts which have to be undertaken in order to create an authentic appearance [75]. Previous research in social science further shows that being able to articulate ideas, thoughts, views and actions in a way that others can understand is another indicator of social intelligence and may thus also be required [12,22].
Being able to lead or take part in conversations pushes the agent further towards incorporating non-verbal conversational skills. These are defined as meta-verbal cues, which include knowing when to pause, adapting the pace of the conversation, following the trend of the conversation, and showcasing availability, attention as well as responsiveness. Also, the agent should be able to communicate both synchronously and asynchronously, and it should have the ability to question a user's input. Our interviewees also support previous research in that conversational behavior includes high quality language skills, the incorporation of meta-verbal cues as well as the mastery of Darwinian buttons [12,30,78].
The ability to understand culture and circumstances, as well as to create an agent persona, requires the capacity to learn from experience. As agents need to be capable of handling both requests coming from the user and self-triggered actions, they often need to connect to APIs, interfaces, and other types of networks. This allows for the outsourcing of abilities that are not frequently used or that do not require significant storage and/or computing power. Showcasing this type of distributed agent, Choi and Yoo propose a way of communication between humans and agents using instant messaging and a unified platform for communication and cooperation [79]. This approach is similar to distributed AI, where the intelligence is contained in objects situated in different locations [80]. While in general such autonomy would support an agent's ability to learn from experience, it lacks supervision, so it may not be clear which direction an agent would take and how relevant information would be integrated into the system, potentially opening the door to manipulation and deception caused by faulty data as well as data neglect.
Although to err is human, in an agent it may be interpreted as non-authentic behavior. Error in this context is defined as performing unpredictable or wrong actions, not acting according to the agent's purpose, persona or communication strategies, or acting within a wrong cultural model. Through learning, agents may, however, understand and adopt human problem-solving strategies to prevent error. As with human education, they may create an action-value map that shows how certain actions or behaviors are perceived. This map should be constantly updated and expanded, essentially depicting the agent's personality.
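One minimal way such a perception record could be kept is sketched below: each (context, action) pair holds a running estimate that is nudged by observed feedback. The update rule, learning rate and names are illustrative assumptions, not a prescribed design.

```python
from collections import defaultdict

class ActionValueMap:
    """Running estimate of how each action is perceived in a given context."""

    def __init__(self, learning_rate: float = 0.1):
        self.values = defaultdict(float)   # (context, action) -> perceived value
        self.learning_rate = learning_rate

    def update(self, context: str, action: str, feedback: float) -> None:
        # Move the stored value toward the observed feedback (e.g. in -1..+1).
        key = (context, action)
        self.values[key] += self.learning_rate * (feedback - self.values[key])

    def best_action(self, context: str, candidates: list[str]) -> str:
        # Prefer the action with the highest learned value for this context.
        return max(candidates, key=lambda a: self.values[(context, a)])
```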
In summary we may therefore argue that our interviews with 12 experts and a confirmatory survey with 5 participants point to a manifold concept of agent authenticity in which five key characteristics seem particularly prominent. Those characteristics are:
• To have a transparent purpose: i.e., an agent has to showcase its intent and decision making in a transparent way so as to make its actions understandable and predictable. Acting transparently and creating confidence by representing the creator or originator of the agent allows for keeping the conversation under control.
• To learn from experience: i.e., an agent requires two-way learning strategies where it autonomously learns from cultural, behavioral, personal, conversational and contextual interaction data.
• To anthropomorphize: i.e., an agent has to act as a persona, including personal values, attitudes, and culture, to establish a relationship with a user. This includes the building of individualized experiences as well as the creation of trust.
• To show strong conversational behavior: i.e., an agent should incorporate relevant communication strategies to successfully handle dialogs and adapt non-verbal interaction behavior such as intelligent reasoning and decision making (cf. recent work on rapport modeling and social conversational strategies [27,45,65]).
• To be coherent: i.e., an agent has to keep up with a conversation and relate to previous elements and experiences. Furthermore, in order to build common ground, it has to be aware of the digital and natural context of the conversation and its (different) conversation partners.
While these five characteristics are generally interrelated, our interview and survey results highlight that coherence and learning from experience are the main contributors to authenticity. That is, keeping track of the conversation connects conversational behavior and coherence. Furthermore, building common ground aims to generate individualized trust and creates a relationship between interlocutors (both human and artificial). Representing a persona is tightly connected to the purpose of the agent as well as its internal sortation. Acting predictably and transparently creates trust. Finally, so as to close the circle, learning from the conversational behavior and interactions helps the agent advance its communication capabilities, and cultural and conversational awareness supports the development of an agent persona.
Conclusions
The results of the studies presented in this paper add to the existing body of knowledge in several ways. A primary contribution of our work concerns a theoretical definition of authenticity related to agent technology, achieved through the conceptualization of constructs found in artificial intelligence as well as in socio-ethics. In doing this, we believe we contribute to theory building in the area of socio-ethics for emerging technologies. As mentioned by Shoemaker and colleagues [81], theoretical constructs are the building blocks for social science theory. In our study we found that authenticity may be defined as the interplay of learning by experience, anthropomorphizing, having a transparent purpose, showing (advanced) conversational behavior and being coherent. We believe this insight can serve as a guideline with which researchers will be able to improve socio-ethics for emerging technologies.
Another relevant contribution of our work should be seen in its connection of two research streams which, to our knowledge, has not been done in this way before. Although the combination of socio-ethics and emerging technologies is an existing research field, our analysis tried to interlink an expert-focused understanding of authenticity related to socio-ethics and high-level concepts inherent to the development of artificial agents. That is, our work focused on identifying characteristics and factors which will hopefully lead to the development of more authentic intelligent agents. In doing so, our study found that the creation of such authentic agents relies on several, highly interconnected characteristics. These interrelations further indicate complex connections within as well as between characteristics, which makes their harmonization an important challenge of future agent development efforts.
In summary we may thus argue that the presented studies should count as the beginning of a journey towards a better understanding of agent authenticity. We do recognize that the generated insights may suffer from a certain participant bias (note: all interview and survey participants have had previous contact and relevant experience with the development of artificial agents or have been researching within this field) and from the fact that they presume the existence of a binary problem space (i.e., what is authentic vs. what is not), while authenticity might rather be measured on a continuum. Future work should thus counter these limitations as well as strengthen these insights. For example, the rather theoretical conceptualization of authentic agent behavior has to be developed further so as to eventually reach the state of a more profound theoretical framework. The dynamic association between the five characteristics may guide this theory building. The interplay of these characteristics may also be an interesting path for future exploration, as may be a user's understanding of authenticity, so as to evaluate the benefit of cultivating such authentic behavior. Future efforts may further expand the authenticity concept to other contexts by, for example, investigating the interplay of brand authenticity and agent authenticity for marketing purposes. Finally, while our work focused on messaging agents, future ambitions may apply these findings to voice, haptic and other interaction channels, which may eventually lead to the type of authentic robots Alan Turing had in mind when proposing his concept of the universal machine [13].
Table 1 .
Interview participants and their respective areas of expertise.
Table 2 .
Number of statements per interviewee related to being transparent.
Table 3 .
Number of statements per interviewee related to learning from experience.
"It needs a pretty good knowledge base about what it's like to be a human."(P03)
Table 4 .
Number of statements per interviewee related to anthropomorphizing.
"It should write like me, pause like me, and talk like me." (P09)
Table 5 .
Number of statements per interviewee related to behaving conversational.
Table 6 .
Number of statements per interviewee related to being coherent.
Low-Metallicity Globular Clusters in the Low-Mass Isolated Spiral Galaxy NGC 2403
The globular cluster (GC) systems of low-mass late-type galaxies, such as NGC 2403, have been poorly studied to date. As a low mass galaxy (M$_{\ast}$ = 7 $\times$ 10$^{9}$ M$_{\odot}$), cosmological simulations predict NGC 2403 to contain few, if any, accreted GCs. It is also isolated, with a remarkably undisturbed HI disk. Based on candidates from the literature, Sloan Digital Sky Survey and Hyper Suprime-Cam imaging, we selected several GCs for follow-up spectroscopy using the Keck Cosmic Web Imager. From their radial velocities, and other properties, we identify 8 bona-fide GCs associated with either the inner halo or the disk of this bulgeless galaxy. A stellar population analysis suggests a wide range of GC ages from shortly after the Big Bang until the present day. We find all of the old GCs to be metal-poor with [Fe/H] $\le$ --1. The age--metallicity relation for the observed GCs suggests that they were formed over many Gyr from gas with a low effective yield, similar to that observed in the SMC. Outflows of enriched material may have contributed to the low yield. With a total system of $\sim$50 GCs expected, our study is the first step in fully mapping the star cluster history of NGC 2403 in both space and time.
INTRODUCTION
The globular cluster (GC) systems of galaxies are an area of active research (see the review by Forbes et al. (2018) and references therein). This is particularly true for low-mass galaxies, which have relatively poor GC systems compared to giant galaxies with their populous GC systems. While the Local Group contains some well-known low-mass galaxies with modest GC systems, such as M33, the LMC and the SMC, the study of their GC systems is in some ways complicated by their proximity. In the case of M33, the radial velocities of its GCs (∼ -180 km/s) overlap with those of high-velocity stars of the Milky Way; the recent work of Larsen et al. (2021) found that several claimed GCs in previous studies of M33 are actually foreground Milky Way stars. The Magellanic Clouds are of course in the process of merging with the Milky Way and it has been speculated for some time that they may have 'lost' GCs to our Galaxy (Lynden-Bell & Lynden-Bell 1995). Thus we need to look further afield to improve our understanding of the GC systems of such low-mass galaxies.
While GC systems, in general, include both in-situ and ex-situ (accreted) formed GCs (Forbes & Remus 2018), low-mass galaxies are expected to be dominated by in-situ formed GCs, e.g. Oser et al. (2010) and Remus & Forbes (2021). In particular, galaxies with a total halo mass of M_halo < 10^11 M_⊙ are expected to be dominated by in-situ formed GCs (Choksi & Gnedin 2019). This is in contrast to massive galaxies like the Milky Way, the halos of which are dominated by the accreted GCs of disrupted dwarf galaxies (Forbes 2020).
NGC 2403 is a low-mass (M = -19.5, M_* = 7 × 10^9 M_⊙), late-type (Scd) spiral galaxy in the outskirts of the M81 group at a distance of 3.2 Mpc (Carlin et al. 2016). It may be infalling for the first time into the M81 group (Williams et al. 2013), with M81 itself lying ∼1 Mpc away. It has a satellite galaxy, DDO 44 (M = -12.5), and recently a smaller dwarf, called MADCASH-1 (M = -7.7), was discovered by Carlin et al. (2016). Although DDO 44 shows signs of interaction (Carlin et al. 2019), NGC 2403 itself reveals a remarkably undisturbed HI disk (de Blok et al. 2008; Williams et al. 2013). This, and its relative isolation, suggests it is a galaxy that has not undergone a strong tidal interaction nor experienced ram pressure stripping. Although bulgeless, it does host a nuclear star cluster (Lira et al. 2007) and it reveals an extended stellar structure at large radii, which is either an extension of the thick disk or a faint stellar halo (Barker et al. 2012). It is also an important Cepheid variable calibrator galaxy (Saha et al. 2006).
NGC 2403 has some global properties similar to those of the bulgeless Local Group galaxy M33 (Scd). M33 contains a large population of intermediate-age star clusters, particularly at large galactocentric radii (Chandar et al. 2002; Davidge 2003; San Roman et al. 2010; Fan & de Grijs 2014), with 85% of them kinematically associated with the inner halo and 15% associated with the disk (Chandar et al. 2002). Chandar et al. (2002) found that the inner halo GCs in M33 reveal a significant age spread when compared to those of the Milky Way. A key difference is that M33 has signatures of an inner stellar halo but not an outer one (Gilbert et al. 2021). This may indicate a relatively small contribution from accreted dwarf galaxies in M33's assembly history. In this context, we note at least one halo GC (HM33-B) which Larsen et al. (2021) argued was accreted on the basis of its anomalous chemical signature.
The GC system of NGC 2403 has been poorly studied to date, with very little work on it since the photographic plate study of Battistini et al. (1984). In that work they identified 8 candidates as having colours typical of GCs, 6 star clusters with bluer colours and 5 with redder colours than typical Milky Way GCs. Davidge (2007) also identified another half dozen GC candidates in their infrared imaging and estimated their sizes. NGC 2403 does not appear in the catalogue of Harris et al. (2013) listing galaxies with known GC systems (it lists M33 as having a total of 50 GCs) and its total number of GCs is unknown. Based on its total halo mass of 2-3 × 10^11 M_⊙ (Li et al. 2020), and the relation of Burkert & Forbes (2020), we predict a total GC system of 40-50 members. Based on a typical GC specific frequency for spiral galaxies, over a range of luminosity, of S_N ∼ 1 (see Lomelí-Núñez et al. 2021), around 60 GCs are expected. As well as having an undefined number of GCs, we are unaware of any published high-quality spectra for confirmed GCs in this nearby galaxy. Thus the fraction of disk vs. halo GCs, and whether its halo GCs resemble the old halo GCs of the Milky Way or its younger accreted GCs, are unknown. The age-metallicity relation (AMR) of NGC 2403's GC system, which potentially holds important clues to the galaxy's assembly history, is currently undefined.
Here we obtain Keck Cosmic Web Imager (KCWI) spectra for 9 GC candidates around NGC 2403, finding 7 to be confirmed GCs. Along with one other GC in NGC 2403 observed using HIRES on the Keck telescope, bringing the total to 8, we discuss their stellar population properties and compare them to those of the SMC/LMC, M33 and the Milky Way. At a distance of 3.2 Mpc (Davidge 2003), 1" corresponds to 15 pc.

Figure 1. Difference between the i-band magnitudes measured on a 30s HSC exposure of NGC 2403 with 3-pixel and 12-pixel apertures, i_3 - i_12, as a function of the i-band PSF magnitude. Point sources occupy the locus at i_3 - i_12 ∼ 0.35. More extended objects containing excess flux (relative to the PSF) exhibit larger i_3 - i_12 values, with those between 0.5 and 0.9 (dashed lines) the best candidates for GCs. We highlight as blue dots the two dozen candidate GCs selected based on all of our criteria, open green squares highlight the six candidate GCs with matches to our HSC catalog, and the orange diamond highlights the new candidate, JC15, that we followed up spectroscopically (see section 2.2).
Figure 2. Gaia parameters "astrometric excess noise" (upper panel) and "BP/RP excess" (lower panel) as a function of Gaia G magnitude for objects in our HSC imaging. The curves in each panel are the GC candidate selection criteria derived by Hughes et al. (2021) to select Cen A clusters. Symbols are as in Figure 1.

Figure 3. Image of NGC 2403 with the locations of our GC candidates indicated, colour coded by their measured radial velocity. We note that NGC 2403 has an RV of 133 km/s and the receding side of the galaxy stellar/HI disk (i.e. higher RVs) is the SE side. A 14.7' × 14.7' inlay is included to indicate the position of JD2, which is ∼1.38 deg North-North-West of NGC 2403, near the dwarf galaxy DDO 44. At 3.2 Mpc, 1' = 0.93 kpc. We find D2 and JD2 to be foreground stars.
GLOBULAR CLUSTER CANDIDATES
Our GC candidates for follow-up spectroscopy have been taken from the literature and supplemented by our own data.
Literature Candidates
From Battistini et al. (1984) we include their objects C1, C4, F1 and F16, and from Davidge (2007) we include D2 and D6. According to Davidge (2007), several GC candidates are partially resolved in their near-IR imaging. They measured extended sizes for D2, D6, F1, F16 and F46. The candidates C1, C4, F16 and F46 are clearly resolved GCs in archival Hubble Space Telescope Advanced Camera for Surveys (HST/ACS) imaging. We have estimated the effective radius of each of these GCs after subtracting in quadrature the ACS point spread function (based on several stars in the image). Our radii are listed in Table 1, with that of F46 taken from Larsen et al. (2021). While we find a similar size for F16 (2.2 vs. 2.7 pc), we can rule out F46 having a size of 10.4 pc, as found by Davidge (2007), based on the ACS imaging. For D2 they estimated a small but extended size of 1.8 pc. We also examined the other candidates listed in Battistini et al. (1984) with available HST/ACS imaging but concluded that they are unlikely to be GCs.
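As a concrete illustration, the quadrature subtraction of the PSF size can be written as a one-line function; the numbers in the example call are placeholders, not our measured ACS values.

```python
import numpy as np

def intrinsic_radius(observed_radius_pc, psf_radius_pc):
    """Remove the PSF contribution in quadrature from a measured cluster size.

    Both inputs are in parsecs (1 arcsec corresponds to ~15 pc at 3.2 Mpc).
    """
    return np.sqrt(observed_radius_pc**2 - psf_radius_pc**2)

# Illustrative values only:
print(intrinsic_radius(observed_radius_pc=2.5, psf_radius_pc=1.2))
```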
New Candidates
We used the characteristics of the candidates from Battistini et al. (1984) and Davidge (2007) to help select additional GC candidates for spectroscopic follow-up. In particular, using SDSS DR7 we searched for slightly resolved objects with a Petrosian radius < 2 arcsec, magnitudes in the range 17 to 19, and colours in the ranges 0.7 to 1.1 and 2.0 to 3.5. Two of these candidates were targeted for follow-up spectroscopy, and are named JD1 and JD2. In the case of JD2, it was not selected as a GC candidate of NGC 2403, as it lies at a projected distance of ∼75 kpc (and hence greater in physical distance), but it does lie near the dwarf satellite galaxy DDO 44 and might be associated with it (given its mass, a couple of GCs might be expected).
We selected further candidates from imaging data obtained with Hyper Suprime-Cam (HSC) on the Subaru 8.2m telescope. These data were gathered as part of the Magellanic Analogs Dwarf Companions and Stellar Halos (MADCASH) survey, a deep, wide-area search for dwarf satellites of nearby Magellanic Cloud analogs. In addition to the deep exposures of NGC 2403 first presented in Carlin et al. (2016), short (30s) exposures of each field were also taken to prevent bright GC candidates from saturating. Here we use the g and i band images taken on 2016/2/9, with seeing of 0.89 and 0.59 arcsec, respectively. After combining the g and i band source catalogs, we matched them to Gaia EDR3 (Gaia Collaboration et al. 2016, 2021), selecting the best match within 1 arcsec of each source. We then selected all objects with magnitudes in the range 17 to 20.5, colours 0.5 < (g − i) < 1.3, and at least 4 arcmin from the centre of NGC 2403 (to avoid the most crowded regions of the images). We further removed all objects whose Gaia proper motions or parallaxes are measured with > 3σ significance, as GCs at ∼3 Mpc distances should not have measurable proper motions or parallaxes.
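These cuts can be expressed as a simple boolean mask; a minimal sketch is given below, in which the catalogue column names and the exact form of the >3σ astrometry test are illustrative assumptions rather than the code actually used.

```python
import numpy as np

def select_gc_candidates(cat, dist_arcmin):
    """Return a boolean mask of GC candidates (column names are assumptions)."""
    mag    = (cat["i_psf"] > 17.0) & (cat["i_psf"] < 20.5)
    colour = (cat["g_psf"] - cat["i_psf"] > 0.5) & (cat["g_psf"] - cat["i_psf"] < 1.3)
    radius = dist_arcmin > 4.0                      # avoid the crowded inner disk
    # Drop sources whose Gaia proper motion or parallax is significant at >3 sigma:
    # a genuine cluster at ~3 Mpc should have neither.
    pm_ok  = (np.abs(cat["pmra"]) < 3 * cat["pmra_error"]) & \
             (np.abs(cat["pmdec"]) < 3 * cat["pmdec_error"])
    plx_ok = np.abs(cat["parallax"]) < 3 * cat["parallax_error"]
    return mag & colour & radius & pm_ok & plx_ok
```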
After these initial selection criteria, we explored aperture magnitudes from the HSC data, and quality parameters from Gaia, that are known to efficiently select extended objects. Because we have a sample of known NGC 2403 GCs, we can confirm that any selection criteria we implement effectively select known GCs. Figure 1 shows the difference between the HSC i band magnitudes measured in apertures of 3-pixel and 12-pixel radii, which we label i_3 − i_12, as a function of PSF magnitude. Objects that are very extended (i.e., galaxies) will have a lot of flux beyond their 3-pixel radius, and thus have high values of i_3 − i_12, while stars should have a small amount of flux beyond a 3-pixel radius, which will be primarily determined by the extendedness of the PSF (and thus roughly the same for all stars). The locus of stars is at i_3 − i_12 ∼ 0.35 in Figure 1. Extended galaxies will be well above this locus, while GCs, which are only slightly extended, will have intermediate flux ratios in this figure.
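A minimal sketch of this concentration-index test is shown below; the helper names and the conversion from aperture fluxes to a magnitude difference are our own illustration of the i_3 − i_12 statistic described above, with purely invented fluxes.

```python
import numpy as np

def concentration_index(flux_3pix, flux_12pix):
    """Magnitude difference between 3- and 12-pixel aperture fluxes (i3 - i12)."""
    return -2.5 * np.log10(flux_3pix / flux_12pix)

def gc_like(index, lo=0.5, hi=0.9):
    """True for slightly extended sources: above the point-source locus (~0.35)
    but below the values typical of clearly extended background galaxies."""
    return (index > lo) & (index < hi)

# Illustrative fluxes (arbitrary units): a star-like and a mildly extended source.
idx = concentration_index(np.array([100.0, 55.0]), np.array([138.0, 100.0]))
print(idx, gc_like(idx))
```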
We note that 3 candidates do not appear in Figure 1. They are: JD1, which is undetected because it lies in a crowded region of the NGC 2403 disk; JD2, which is beyond the HSC field of view; and F1, as it does not have reliable measurements from Gaia (it nonetheless appears extended according to its measured i_3 − i_12 value). Candidate D2 has flux ratios in this figure consistent with being a point source, and also a large measured proper motion, and is thus likely a foreground star. Figure 2 displays the Gaia parameters "astrometric excess noise" (AEN) and "BP/RP excess", which have been shown by Hughes et al. (2021) to effectively select objects around Cen A that are slightly more extended than the PSF (i.e., candidate GCs), as a function of Gaia G magnitude. We overlay and adopt the curves from Equations 1 and 2 of Hughes et al. to separate candidate extended objects from point-like sources.
While this selection yields two dozen candidates, upon visual inspection, we found many of these are obvious background galaxies. Additional work is therefore required to create a comprehensive list, over the entire luminosity function and spatially, of the 50 plus GC candidates expected around NGC 2403.
For the best candidate, which we name JC15, we have obtained a spectrum (see below). JC15 lies close to the confirmed GCs D6 and F16 in the selection figures. Most importantly, JC15 shows signs of extendedness in i_3 − i_12, but does not have extremely large flux ratios. It appears round in the i band HSC images, with some hints of semi-resolved faint stars at the outskirts.
Candidates for Follow-up Spectroscopy
Our final sample of GC candidates for which we obtain spectra is given in Table 1. It lists the g band magnitude and g − i colour taken from SDSS or HSC imaging. The quoted colours are not corrected for Galactic extinction nor for internal extinction within NGC 2403. Based on their projected positions on the galaxy disk, JC15 may experience the most internal extinction (it is the reddest object) and D6 the least. We also include in Table 1 (and subsequent tables) the confirmed GC F46. This Battistini et al. (1984) GC candidate was subsequently confirmed by Larsen et al. (2021) (hereafter L21) using the HIRES instrument at the Keck Observatory. Fig. 3 shows an image of NGC 2403 with the locations of our GC candidates indicated. Other than JD2, all of our GC candidates follow a flattened disk structure along the major axis. Given the lack of a bulge in NGC 2403, we expect confirmed GCs to be associated with either the disk or the inner halo. In Fig. A1 we show a montage of SDSS or, if available, HST 'postage stamp' images of each GC candidate (including F46 from L21).
Exposure times ranged from 10 to 40 mins under 1-2" seeing conditions. We used the medium field-of-view and the BH3 grating (with a spectral resolution R = 9000) for C4, D2, D6, JD2. For C1, F1, F16 and JD1 we used the large field-of-view and the BL grating (R = 900). Standard stars were observed in the same setups as the target objects. Some lower S/N spectra were also obtained for some of these candidates but they are not used in this current work.
The data were processed using the standard KCWI data reduction pipeline. We took the non-sky-subtracted, standard-star-calibrated cubes and cropped them to the wavelength range common to all slices to use them for the remainder of our analysis. The globular clusters were sky subtracted by taking an appropriately sized region centred on each cluster and subtracting off an on-chip offset region as 'sky'. Where multiple exposures were taken of the same object these were mean stacked to produce a single spectrum for each GC. Final integration times for each object were: 1440s for C1, 2400s for C4, 1200s for D2, 1200s for D6, 1200s for F1, 1440s for F16, 1200s for JD1, 900s for JD2 and 900s for JC15. Table 2 lists the candidate coordinates, the signal-to-noise ratio of each spectrum and our measured radial velocity (with a barycentric correction applied), with typical uncertainties of ∼10 km/s. All of the measured radial velocities are consistent with those expected for NGC 2403; however, they are also within the velocity range of Milky Way stars.

Figure 4. Original spectra, shifted to rest wavelength, are shown in black and our model fits in magenta. The key absorption lines used for the analysis are shown as vertical lines (e.g., Hβ, Fe5015, Mgb and Fe5270), as labelled in the top panel. Note that different KCWI grating setups were used in the observations, and thus the grey area corresponds to wavelengths not covered by the shorter grating, the BH3.
The GC candidates shown in Fig. 3 are colour coded according to their measured radial velocity (RV). The general rotation of the stellar and HI disk from Fraternali et al. (2002) shows that the SE side of NGC 2403 is receding at 250 km/s while the NW side is 'approaching' us at 10 km/s (the maximum disk rotation velocity is around 130 km/s). The mean RV of NGC 2403 is 133 km/s. On the basis of measured velocities relative to the galaxy's velocity field, we suggest that D6, F1 and JD1 (and F46) are GCs likely associated with the halo rather than the disk of NGC 2403. Given that these GCs all lie within a projected distance of 13 kpc, they may be more akin to the inner halo GCs seen in the Milky Way's GC system. The objects consistent with the general disk rotation are C1, C4, F16 and JC15. Although the dwarf galaxy DDO 44 reveals a tidal stream due to a likely interaction with NGC 2403 (Carlin et al. 2019), it has a recession velocity of 255 km/s, and so JD2, with a velocity of ∼0 km/s, is very unlikely to be associated with DDO 44. We conclude that JD2 is a likely foreground star. Candidate D2 has a blue colour, a near-zero radial velocity, a similar flux in 3 and 12 pixel apertures in our HSC imaging, and Gaia parameters consistent with a Milky Way star. So despite the extended size for D2 given by Davidge (2007), we conclude that it is actually a foreground star.
The status of each GC candidate is summarised in Table 2, with 7 being confirmed GCs and 2 as foreground stars. We will therefore no longer consider these stars further. Fig. 4 shows the KCWI spectra for the final sample of 7 confirmed GCs (and highlights the different wavelength scales used for the observations). The key absorption lines are indicated.
STELLAR POPULATION ANALYSIS
To analyse the stellar populations of the 7 GCs, we follow the method outlined in Ferré-Mateu et al. (2021) for compact stellar systems. Briefly, we use the MILES Single Stellar Population (SSP) library (Vazdekis et al. 2010) models with the BaSTI isochrones, considering templates that range in metallicity from [Z/H] = -2.42 to +0.40 dex and cover ages from 0.03 to 14 Gyr, assuming a universal Kroupa IMF. The SSP models include different alpha-element ratios, from solar to super-solar (Vazdekis et al. 2016). The work of Mendel et al. (2007) compared the SSP models of Vazdekis et al. to other SSP models and found good consistency for Milky Way GCs. We run pPXF using multiplicative polynomials and applying a regularization value that ensures that the resulting star formation history is the smoothest possible while maintaining a realistic fit. The best-fit SSP model from pPXF (Cappellari 2012) for each GC is overplotted in Fig. 4. For the results, we adopt the means of the mass-weighted stellar population parameters, except for the [Fe/H] metallicity, which is measured directly from the absorption line indices. For the [α/Fe] ratios we employ the Mgb and Fe5015 lines (which are common to all spectra; the same pair was employed by the SAURON survey of early-type galaxies; Peletier et al. (2007)). We find that only JD1 has a clear super-solar alpha-element enhancement and, in this case, we employ the corresponding alpha-enhanced, rather than base, MILES models. Otherwise the GCs are consistent with a solar alpha-element ratio. Table 3 presents the mean mass-weighted ages and metallicities that we measure, with uncertainties. Quoted uncertainties are determined using a Monte Carlo technique in which we vary different ingredients of the pPXF analysis (e.g. polynomials, degree, regularization, alpha vs. base models) to determine the overall uncertainties. Stellar masses are calculated from the g-band magnitudes and the mass-to-light ratio resulting from the SSP analysis. We note the very young age of the cluster C1 (400 Myr) and the strong Balmer absorption lines in its spectrum. Our SSP analysis reveals no evidence for a hidden old stellar population in this GC. Although we continue to refer to C1 as a GC, some prefer the 'young massive cluster' nomenclature.
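The Monte Carlo treatment of the uncertainties can be illustrated as follows. The fit function below is only a stand-in that returns dummy numbers (in practice it would wrap the actual regularized pPXF call), and the specific regularization and polynomial values looped over are assumptions for illustration, not the values used in our analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ssp(spectrum, noise, templates, regul, degree):
    """Stand-in for one regularized full-spectrum SSP fit (e.g. with pPXF).

    Returns (mass-weighted age in Gyr, [Fe/H]); here the values are dummies so
    that the loop below runs. The signature is an illustrative assumption."""
    return 10.0 + rng.normal(0, 0.5), -1.5 + rng.normal(0, 0.1)

# Vary analysis ingredients and take the scatter of the results as the
# systematic uncertainty on the quoted stellar population parameters.
ages, fehs = [], []
for regul in (0, 10, 50):          # assumed example regularization values
    for degree in (4, 8, 12):      # assumed example polynomial degrees
        age, feh = fit_ssp(spectrum=None, noise=None, templates=None,
                           regul=regul, degree=degree)
        ages.append(age)
        fehs.append(feh)

print(f"age = {np.mean(ages):.1f} ± {np.std(ages):.1f} Gyr, "
      f"[Fe/H] = {np.mean(fehs):.2f} ± {np.std(fehs):.2f}")
```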
For F46, we list the stellar population parameters determined independently by Larsen et al. (2021), i.e. their assumed age and measured [Fe/H] metallicity (they found [α/Fe] = +0.15). Their assumed age of 13 Gyr can be considered an upper limit. For the stellar mass we assume the same mass-to-light ratio as for JC15 (i.e. M/L ∼ 2), which gives a stellar mass of 11 × 10^5 M_⊙, in reasonable agreement with the dynamical mass of 15 × 10^5 M_⊙ calculated by L21.
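For reference, converting an apparent magnitude and an SSP mass-to-light ratio into a stellar mass at the 3.2 Mpc distance of NGC 2403 can be sketched as below; the adopted solar absolute g-band magnitude and the example inputs are assumptions for illustration only, not the values used for any specific cluster.

```python
import numpy as np

def stellar_mass(app_mag_g, ml_ratio, distance_mpc=3.2, absmag_sun_g=5.11):
    """Stellar mass (Msun) from an apparent g magnitude and an SSP M/L ratio.

    absmag_sun_g is the Sun's absolute g-band magnitude (assumed value)."""
    dist_modulus = 5 * np.log10(distance_mpc * 1e6) - 5
    abs_mag = app_mag_g - dist_modulus
    luminosity_g = 10 ** (-0.4 * (abs_mag - absmag_sun_g))   # solar g-band units
    return ml_ratio * luminosity_g

# Illustrative numbers only:
print(f"{stellar_mass(app_mag_g=18.0, ml_ratio=2.0):.2e} Msun")
```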
Using the stellar parameters determined from our SSP analysis, we predict g − i colours for our GCs. These predicted colours are compared to the measured colours, corrected for Galactic extinction (A = 0.11), in Fig. 5. The predicted and Galactic-extinction-corrected colours are in reasonable agreement, providing further confidence in our stellar population analysis. Correction for internal dust extinction within NGC 2403 for the disk GCs could improve the agreement further.
Before discussing our stellar population results it is worth reviewing the potential biases and caveats in the object selection and data analysis. Both the literature and our own selection of GC candidates are naturally biased away from the inner disk due to the possible confusion with young star clusters and HII regions. This might tend to bias our sample against the selection of metal-rich disk GCs.
The stellar masses calculated for our sample of GCs (see Table 3) are, with one exception, all above 10^5 M_⊙. With a mean GC mass of around 2 × 10^5 M_⊙ for the Milky Way's GC system, this indicates that our sample is drawn from the most massive or brightest of the GCs in NGC 2403 (for comparison, the GC luminosity function turnover magnitude is M ∼ -7.2). We note that selecting the more massive GCs from the Milky Way's system would create a bias towards selecting accreted GCs (Mackey & van den Bergh 2005).
There is the possibility of contamination of our GC spectra by emission lines associated with star formation in the disk of NGC 2403. If present, Hβ emission would tend to fill in the Hβ absorption line, resulting in an older inferred age if not corrected for. If contamination from emission lines were present, we would expect to see the [OIII] line at 5007 Å; a visual inspection reveals no clear evidence for [OIII]. Furthermore, there is no evidence for emission lines from our pPXF analysis of the entire spectrum. Thus we conclude that our spectra do not suffer from such contamination.
Moreover, the observed GCs may contain blue horizontal branch (BHB) stars. These hot stars can give the impression of a young spectral age while having a limited effect on the derived metallicity and alpha elements (Conroy et al. 2018). The study of Mendel et al. (2007) examined the effects of BHB stars on the stellar population parameters derived from integrated spectra and found a limited impact when a large spectral range was used. We thus consider that the ages of those GCs with a shorter wavelength baseline should be regarded as lower limits, as without higher quality and/or longer baseline spectra we are unable to rule out the presence of BHB stars.
RESULTS AND DISCUSSION
In this work we obtain spectra, radial velocities, stellar population parameters (age, metallicity and alpha abundances) and stellar masses for 7 confirmed GCs associated with NGC 2403. One additional GC, F46, was previously observed using HIRES at the Keck Observatory by Larsen et al. (2021) and we include it in our subsequent analysis. They derived a radial velocity (confirming its association with NGC 2403), a low metallicity and a mildly enhanced alpha-element ratio. Previously, Battistini et al. (1984) managed to obtain low-quality spectra of 3 GC candidates (C3, C4 and F21). For C3 they noted similarities to an A-type spectrum, indicating an age of ≤1 Gyr. For the only object in common with this study, C4, they described the spectrum as being that of a classical GC (which we confirm). For F21 they concluded it was a background galaxy. The stellar populations, and possible biases/caveats, for the Keck-observed GCs are summarised in Section 4 and Table 3. Bearing in mind that 8 GCs out of a possible GC system of around 50 may not be representative, we discuss our sample of high-mass disk/halo GCs in the context of GC systems in other nearby galaxies below.
Our data indicate that the GC system of NGC 2403 was formed over an extended period of time, from early in the Universe until today. The GCs have near-solar alpha-element ratios, with one presenting super-solar enhancement. Compared to the two subpopulations of the Milky Way's GC system, the old GCs studied here are all relatively metal-poor ([Fe/H] ≤ -1); only the very young cluster C1 is metal-rich. We note that Davidge (2003) also measured a low metallicity of [Fe/H] = -2.2 for field stars in the outer halo (projected ∼22 kpc along the minor axis, beyond our most distant GC) of NGC 2403. This metallicity is comparable to those of the oldest GCs in our sample.
In Fig. 6 we show the ages and metallicities of our GCs, together with the M33 cluster of Chandar et al. (2006), which has a metallicity of [Fe/H] = -0.65±0.16 (Sarajedini et al. 2000). We also show the mean values for a dozen old M33 GCs (i.e. age = 10.35±0.71 Gyr and [Fe/H] = -1.12±0.09) from Beasley et al. (2015). We refer the reader to Larsen et al. (2021) for arguments that some of the Beasley et al. objects may actually be foreground stars (their exclusion does not significantly change the mean values). For the Magellanic Clouds we take the reliable ages and metallicities from the appendix of Horta et al. (2021), i.e. those with a confidence code of 1. We take ages and metallicities for Milky Way GCs from Forbes (2020). In the Milky Way, the in-situ formed bulge and disk GCs formed over a short time period (1-2 Gyr) at early times (∼13 Gyr ago), whereas the younger GCs are all the result of ex-situ formation and subsequent accretion of their host dwarf galaxy (Forbes 2020).
Our observed NGC 2403 GCs in Fig. 6 are coded by whether we have assigned them to the halo or to the disk of NGC 2403 on the basis of their relative velocities (see Table 2). We find no clear distinction between disk and halo GCs in terms of their average age or metallicity. The (disk) GC F46 is shown with an upper age limit of 13 Gyr, as assigned by L21. Given that NGC 2403 has little or no bulge (similar to M33), we do not expect bulge GCs to be present as they are in the Milky Way. At a given age, the NGC 2403 GCs observed in this study have metallicities most similar to those of the SMC, slightly lower than those of M33 and the LMC, and systematically lower than those of GCs accreted from dwarf galaxies onto the Milky Way. In terms of the age distribution, the Magellanic Clouds, M33 and NGC 2403 all appear to have formed star clusters from early times (∼13 Gyr ago) until just a few Gyr ago. We remind the reader of C3 in NGC 2403, which Battistini et al. (1984) claimed has an A-type spectrum indicative of a ≤1 Gyr old cluster. Furthermore, massive (∼10^5 M_⊙), young (∼0.1 Gyr) star clusters have been identified by Larsen & Richtler (1999) to be forming up to the present day in NGC 2403.
To further discuss the age-metallicity distribution of our observed GCs, we also show in Fig. 6 the age-metallicity relation (AMR) for a leaky-box chemical enrichment model. The model assumes non-enriched gas 13.5 Gyr ago and an effective yield p (i.e. the elements returned to the ISM for subsequent star formation after accounting for outflows of enriched material), which is related to stellar metallicity via [Fe/H] = log10[-p ln(t/13.5)], with t the age in Gyr. Such a model provides a good representation of the GCs from disrupted satellites in the Milky Way. Indeed, this aspect of dwarf galaxy GC systems was used by Kruijssen et al. (2019) and Forbes (2020) to associate individual Milky Way GCs with disrupted satellites. The yields for the five largest disrupted satellites, including the Sgr dwarf, were found to range from 0.22 to 0.35. The model shown in Fig. 6 has p = 0.1 and is very similar to the one derived from the stars in the SMC (M_* = 0.4 × 10^9 M_⊙) by Leaman et al. (2013).
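A short numerical sketch of this AMR, using the relation in the form reconstructed above (our reading of the model description), illustrates how the effective yield sets the metallicity reached at a given age; the yield values and ages below are purely illustrative.

```python
import numpy as np

def leaky_box_feh(age_gyr, effective_yield):
    """Leaky-box age-metallicity relation with pristine gas 13.5 Gyr ago.

    Evaluates [Fe/H] = log10(-p * ln(t / 13.5)), with t the age in Gyr and p the
    effective yield; this functional form is our reading of the text above."""
    return np.log10(-effective_yield * np.log(age_gyr / 13.5))

ages = np.array([13.0, 10.0, 6.0, 2.0])
print(leaky_box_feh(ages, effective_yield=0.1))  # low, SMC-like yield
print(leaky_box_feh(ages, effective_yield=0.3))  # yield typical of larger accreted satellites
```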
The steepness of an AMR (i.e. the metallicity at a given age), as represented by the yield, is primarily driven by the mass of the host galaxy. Given that the AMR for NGC 2403 appears to be lower than that of the dwarf galaxies accreted onto the Milky Way, and perhaps also lower than that of the GCs of M33 (which has a similar luminosity to NGC 2403), other factors could be at play. One factor that may act to lower an AMR is the outflow of enriched material. In a study of nearby spirals and dwarf galaxies, Dalcanton (2007) concluded that "Metal-enriched outflow is therefore the only viable mechanism for producing galaxies with low effective yields." The presence of an extended 'HI halo' around NGC 2403 led Fraternali et al. (2002) to suggest that it may be the signature of a galactic fountain. This outflow from the disk to the halo may contribute to the low effective yield, as argued by Dalcanton (2007). Furthermore, there is a step change in the effective yield to lower values at rotation speeds of V ≤ 150 km/s (Garnett 2002). NGC 2403, with V ∼ 130 km/s, falls close to this key transition.
Cosmological simulations predict that galaxies of NGC 2403's mass have not undergone significant accretion (e.g. in the model of Choksi & Gnedin (2019)) and that only in-situ formed GCs are expected. Indeed, there are several arguments against NGC 2403 having experienced a massive merger event in the past, which would mean few, if any, accreted GCs. NGC 2403 is relatively isolated and may only be infalling into the low-mass M81 group for the first time (there is no evidence of ram pressure stripping). It also reveals a remarkably uniform, regularly rotating HI disk (de Blok et al. 2008). Its old stellar disk has a uniform metallicity of [Fe/H] ∼ -1 (consistent with our most metal-rich GCs), indicating that it is well mixed, and has an unbroken surface brightness profile (Williams et al. 2013). This suggests a relatively undisturbed galaxy despite its probable interaction with the dwarf satellite DDO 44 (Carlin et al. 2021). Davidge (2003) found evidence for AGB stars (ages ≤ 1 Gyr) beyond the disk of NGC 2403 and argued that they formed in-situ and not via accretion. We also note that F46 does not have the anomalous chemical signature indicative of an accreted GC (Larsen et al. 2021). While we cannot rule out accreted GCs, our sample of GCs was most likely formed, over many Gyr, from in-situ gas.
CONCLUSIONS
In this work we studied a sample of relatively high mass globular clusters (GCs) associated with the low mass, bulgeless galaxy NGC 2403. Based on GC candidates in the literature, and new candidates selected from SDSS and our Hyper Suprime-Cam imaging, we obtained spectra of nine candidates using the KCWI instrument at the Keck Observatory. We confirm seven of them to be GCs associated with NGC 2403, with the remaining two likely to be foreground stars.
Our results are supplemented with one additional GC observed using HIRES, also from Keck, by Larsen et al. (2021).
By comparing the radial velocities of the eight confirmed GCs with the overall rotation of the HI disk, we assign four GCs to the disk and four to the halo. Single stellar population model fits indicate that the GCs were formed over many Gyr (with the caveat that the presence of blue horizontal branch stars can make GCs appear much younger than they are). We find most to be consistent with a solar alpha-element ratio. However, the old GCs are all found to be metal-poor, i.e. [Fe/H] ≤ -1. The one young, massive cluster for which we obtained a spectrum is more metal-rich. Cosmological simulations predict that few, if any, accreted GCs will be found in galaxies of the mass of NGC 2403. Based on the lack of disturbance in the stellar and HI disks, and its relative isolation, both the disk and halo GCs of NGC 2403 likely formed over an extended period from in-situ gas.
Our NGC 2403 GCs are systematically lower in metallicity at a given age compared to the Milky Way's GC system, and perhaps also compared to the GCs of M33. However, they exhibit a similar age-metallicity relation to the GCs of the Small Magellanic Cloud. The age-metallicity relation of the NGC 2403 GCs can be approximated by a leaky-box chemical enrichment model with an effective yield similar to that inferred for the SMC stars. Outflows of enriched material may explain the relatively low yield inferred for NGC 2403's GC system.
A qualitative examination of apathy and physical activity in Huntington’s and Parkinson’s disease
Aim: In Huntington’s disease (HD) and Parkinson’s disease (PD), apathy is a frequently cited barrier to participation in physical activity. Current diagnostic criteria emphasize dissociable variants of apathy that differentially affect goal-directed behavior. How these dimensions present and affect physical activity in HD and PD is unknown. Methods: Using a qualitative approach, we examined the experience of apathy and its impact on physical activity in 20 people with early-manifest HD or idiopathic PD. Results: Two major themes emerged: the multidimensionality of apathy, including initiation or goal-identification difficulties, and the interplay of apathy and fatigue; and facilitators of physical activity, including routines, safe environments and education. Conclusion: Physical activity interventions tailored to apathy phenotypes may maximize participant engagement.
Huntington's disease (HD) is a dominantly inherited neurodegenerative disorder that leads to motor dysfunction, cognitive impairment and neuropsychiatric changes. The disease is caused by the production of an abnormal protein that is especially toxic to the striatal cells of the basal ganglia [1]. In Western populations, HD is estimated to affect 10.6-13.7 people per 100,000. Similar to HD, Parkinson's disease (PD) is a neurodegenerative movement disorder caused by an unexplained degeneration of the nigrostriatal dopaminergic neurons of the basal ganglia [2]. PD is more common than HD, affecting an estimated 428 per 100,000 of 60-69 year olds, with prevalence increasing steadily with age [3]. HD and PD are diseases of the basal ganglia characterized by degeneration of the cortico-striatal-thalamic circuits. In healthy people, these circuits facilitate high level cognition and voluntary motor function [4,5]. In HD and PD the degeneration of the basal ganglia leads to physical changes that detrimentally affect daily functioning [6,7]. These include changes to balance, gait and postural stability. Physical activity programs are thus especially important in the management of neurodegenerative movement disorders to promote physical, social and emotional wellbeing.
Interventions that promote physical activity in HD and PD, alongside emotional and cognitive engagement, have been trialled across inpatient, outpatient and community settings [8][9][10][11]. Most research interventions focus on short-term gains, and do not prioritize the long-term maintenance of health behaviors. Similarly, limited resources mean that clinical interventions are often targeted and time limited [12]. The long-term efficacy of physical activity interventions thus depends on the compliance and engagement of the participant, who must independently adhere to their program long-term to maximize the benefits. Long-term reliance on participant adherence, however, poses a challenge for people with neurological conditions in which apathy is a common symptom, such as HD and PD.
Alongside disturbances in motor function and cognition, people with HD and PD experience behavioral disturbances, of which apathy is one of the most common [13][14][15]. Apathy is defined as a reduction in goal-directed behavior associated with reduced interest, motivation and emotional reactivity [16,17] that contributes to lower quality of life, less functional independence, cognitive decline and greater caregiver burden [18,19]. In early manifest HD, up to 63% of people report clinically relevant apathy [20], and in PD, clinically elevated apathy affects approximately 50% of people [15,21]. The high prevalence of apathy in HD and PD is attributable to altered cortico-striatal-thalamic circuits, as these pathways are important for initiating, maintaining and reinforcing motivated goal-directed behavior [22,23]. Current diagnostic criteria of apathy across psychiatric and neurological disease emphasize dissociable variants that differentially affect goal-directed behavior, including behavioral (difficulty initiating action), cognitive (difficulty identifying a goal and associated plan of action), emotional (blunted emotional responses) and social apathy (social withdrawal) [16,17,24]. How these apathy subtypes manifest and whether they relate differentially to physical activity in HD and PD is unknown.
Clinical assessment of apathy is typically based on self- or partner-rating scales, or clinician interview [25]. Self- and partner-report can enable tracking of apathy across time; however, this approach lacks information about the context in which apathy is experienced or observed and provides no account of apathy in a person's own words. Patient-centered accounts of apathy will facilitate more accurate measurement of apathy subtypes and inform physical activity interventions targeted to HD and PD populations; these accounts may be best achieved using qualitative methods but to date have been underutilized. Only two studies have employed a qualitative framework to examine apathy in PD [26,27], but the manifestation of apathy subtypes or their impact on physical activity was not explored. Likewise, qualitative studies of physical activity in HD and PD [28] focus on the general barriers and facilitators of physical activity, without consideration of non-motor symptoms, such as apathy, that may affect participation [29].
The aim of our exploratory study was to understand the subjective experience of apathy subtypes and how they influence engagement in physical activity. We included people with HD and PD because these neurodegenerative diseases share pathology of the basal ganglia and apathy is a commonly reported but poorly understood symptom in both diseases [17,30,31]. For the purpose of this study, we designed and implemented a semi-structured interview schedule and administered the Lille Apathy Rating Scale (LARS) [32], which is a clinician-administered structured interview of apathy. We then analyzed these interviews using qualitative methods, led by two overarching questions: how does the experience of apathy across its different dimensions affect engagement in physical activity behaviors? And which behavioral strategies compensate for apathy and enable physical activity?
Design
We used a phenomenological qualitative research design to explore participants' lived experience of apathy and its subtypes and how these experiences affected their engagement in physical activity. To do so, we developed a semi-structured interview schedule, which we administered alongside the LARS.
Participants
Recruitment sources included a participant registry at Teachers College, Columbia University and word-of-mouth. Four participants with HD were approached in person at the time of their clinic appointment at the Huntington's Disease Society of America (HDSA) Center of Excellence at New York State Psychiatric Institute (NYSPI). We recruited 20 people diagnosed by a neurologist with manifest HD (n = 10) or idiopathic PD (n = 10). We included participants in the early to middle stages of disease (see Table 1 for demographic and clinical information) and excluded those with late or advanced stage disease. For HD participants, we determined disease stage using the Total Functional Capacity (TFC) score of the Unified Huntington's Disease Rating Scale (UHDRS) [33]. For PD participants, we determined disease stage using the Hoehn and Yahr Staging criteria [34]. Late and advanced disease for HD participants was defined as a TFC score of less than seven [33], and in the case of people with PD, a Hoehn and Yahr stage of greater than three [34]. We obtained medical information pertaining to disease stage from the clinical recruitment site or via our internal participant registry at Teachers College.
We used a purposive sampling approach to ensure equal representation of people with HD and PD. PD participants were on average 12.9 years older than those with HD (p = 0.009) and had more years of education (p = 0.002; Table 1). All participants ambulated independently but reported some form of cognitive or physical impairment, or mood-related changes that interrupted their ability to perform certain activities (e.g., pain, slowness in movement, reduced dexterity, delayed reaction times, balance, anxiety, problem solving). Individual characteristics of participants with HD and PD are presented in Tables 2 and 3, respectively. Technical failure in the recording of two interviews resulted in missing transcripts from the qualitative analyses for two participants, both from the HD sample. This technical failure resulted in a final HD analytic sample of eight. All participants provided informed consent in accordance with the Declaration of Helsinki. Ethical approval was obtained from NYSPI and Teachers College, Columbia University Institutional Review Boards.
Procedure
We interviewed participants in person (n = 9) or remotely via audio/video conferencing (n = 11). In person interviews were conducted at the Neurorehabilitation Research Laboratory at Teachers College, or at the HDSA Center of Excellence at NYSPI. All interviews were audio recorded. KJA conducted the interviews, which ranged from 60 to 90 min in duration and followed our pre-planned interview schedule, which served as a checklist and guide toward the topic area for discussion. To facilitate recall, the interview schedule was built around specific episodes of activity or sedentary behaviors. This minimized the executive load required of participants, thereby limiting the impact of possible cognitive impairment on interview responses. Broadly, the question schedule aimed to establish: the lived experience and motivation for physical activity; the lived experience of sedentary activity and the reason for sedentary behavior; and the subjective impact of apathy on physical activity. At the completion of our interview schedule, we administered the LARS. We then analyzed these interviews using qualitative methods, led by two overarching questions: How does apathy across its different dimensions affect engagement in physical activity behaviors; and which behavioral strategies compensate for apathy and enable physical activity?
Lille apathy rating scale
The LARS [32] is a clinician-rated, semi-structured interview that assesses four domains of apathy: intellectual curiosity, action initiation, emotion and self-awareness. Administration of the LARS includes 33 items read aloud to the respondent. The first three items are rated on a five-point Likert-type scale, whereas the clinician evaluates the remaining 30 items on a binary yes/no scale with a not available (NA) rating option for responses that are non-classifiable. The LARS produces a total apathy score ranging from -36 to 36, with lower scores indicative of more apathy symptomatology. The developers suggest that a cutoff score of -16 or less is indicative of clinically relevant apathy [14,32].
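As a minimal illustration of the scoring rule described above (the score range and cutoff come from the text; the function and example scores below are hypothetical and are not part of the LARS materials), a short Python sketch:

```python
def lars_clinically_apathetic(total_score: float, cutoff: float = -16.0) -> bool:
    """Flag clinically relevant apathy from a LARS total score.

    The LARS total ranges from -36 to +36; lower scores indicate more apathy,
    and the developers' suggested cutoff is -16 or less.
    """
    if not -36.0 <= total_score <= 36.0:
        raise ValueError("LARS total score must lie between -36 and +36")
    return total_score <= cutoff


# Hypothetical example scores for three respondents
for score in (-22.0, -10.5, 3.0):
    label = "clinically relevant apathy" if lars_clinically_apathetic(score) else "sub-threshold"
    print(f"{score:+.1f}: {label}")
```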
Data analysis
We analyzed the verbatim responses of participants from both our interview schedule and the LARS, using the inductive thematic analysis approach described by Braun and Clarke [35]. Interviews were transcribed verbatim by KJA who subsequently re-reviewed them for accuracy. Analysis followed five stages. In the first stage, KJA carefully read all 18 transcripts to promote familiarization with the data. Throughout this stage, KJA used notes to record ideas for codes and themes, which informed the analyses. Second, transcripts were re-read in detail and coded, using an open-coding framework based on the principles of thematic analysis [35]. In the third stage, KJA and LQ sorted the generated codes into potential sub-themes. LQ is an experienced physical therapist with a research program specializing in physical therapy interventions. Charting and mind-maps were employed in the third stage to establish links between sub-themes and to visualize how each code fit within the broader theme. Fourth, KJA and LQ reviewed the codes to determine any overlap or repetition across codes, and whether any codes could be collapsed to more concisely describe the data. During this stage, we also considered whether we could improve the descriptors used to capture the content of each code or overarching sub-theme, and where relevant, we adjusted the descriptors accordingly. Finally, the sub-themes were collated into two major themes that encapsulated their overarching meaning and that related to the research questions of interest.
To ensure the accuracy of our interpretations, we employed analyst triangulation and peer debriefing. This included a sample of cross-coding, alongside multiple ongoing discussions between KJA and LQ about the generated codes and the identified themes. Through these discussions, we determined alternative interpretations and ensured data saturation was reached prior to the preparation of results. The interpretation of the major themes and the sub-themes within were discussed with all authors who together have extensive research and clinical experience in HD and PD. Discrepancies in interpretation were discussed as a group and resolved with the oversight of KJA and LQ. We used the qualitative data analysis package NVivo (QSR NVivo for Macintosh, version 2.0., QSR International Pty Ltd) to assist with summarizing and analyzing the data.
In addition to our qualitative analyses, we used the total apathy score derived from the LARS to characterize the severity of apathy in the sample.
Qualitative results
Our qualitative analysis identified two major themes: the impact of multidimensional apathy on physical activity, and strategies to facilitate engagement in physical activity. To effectively convey their meaning, we deconstructed our major themes into six sub-themes (see Figure 1). Major theme one consisted of the following three sub-themes: 1a) apathy leads to difficulties in initiation in both PD and HD; 1b) people with HD have difficulties identifying goals for action; and 1c) fatigue influences physical activity and is not easily distinguished from apathy. Major theme two consisted of the following three sub-themes: 2a) the use of prompts, routine and structure to promote physical activity; 2b) creating a safe social environment facilitates exercise engagement, however resources for people with HD appear to be lacking compared with PD; and 2c) the psychological response to a diagnosis affects engagement and psychoeducation after diagnosis may promote physical activity. We describe each sub-theme below, with examples of key quotes from participants.
Major theme one: the impact of multidimensional apathy on physical activity
Apathy leads to difficulties in the initiation of activity in both HD & PD
Both HD and PD participants described difficulty in initiating activities they believed would be positive for their physical and mental health. These descriptions were both overt, "I don't know, if it's just me and I'm lazy or if it's just part of the disease, but I have difficulties sometimes getting myself off the ground and up off the couch and doing things" (HD03), and indirect descriptions of wanting to "get out" and not doing so until prompted by a friend or spouse: "I feel bad, I feel sad sometimes, because...I'll be home, I might put on the T.V....then that's it...but as soon as she texted me, I thought 'ooh' this is a really good day to get out" (HD04).
Many HD and PD participants identified activities or behaviors they wanted to engage in but described difficulty initiating the activity: "we've always talked about [walking] and we definitely could go out and do it after work.But for some reason you just get home and you're just not particularly motivated" (HD03).Difficulties in initiation were especially prominent after a disruption to established routines, which ranged from relatively minor changes, such as a holiday, to major events, such as moving to a new house or losing a job.
People with HD have difficulties identifying goals for action, which may reflect cognitive apathy
Particularly for people with HD, a reduced interest or curiosity was evident, which was often associated with a difficulty identifying goals or activities of interest. Although these participants did not directly indicate a lack of interests, they responded vaguely and were unable to draw on any recent examples of self-generated, spontaneous activity. For example, when asked what they typically did after work, one HD participant reflected: "Go home... generally go home and ah..." (HD05). Subsequent prompting of his hobbies provided little further information and when asked whether he was interested in new things he replied: "Hmmm...doesn't faze me...don't really care." Difficulties in goal identification were also exemplified by participants who described apathy as a sense of indecision, of being "stuck and caught" and not knowing how to proceed, possibly reflecting a difficulty in mentally elaborating on a plan of action [17,36].
Fatigue influences physical activity & is not easily distinguished from apathy
People with HD and PD tended to describe apathy in a way that was conceptually similar to fatigue. One participant with PD defined apathy as "everyday activities [become] difficult, tiring, emotionally more difficult, and harder", and a participant with HD described the experience of apathy as "running out of steam". Both peripheral (difficulty maintaining physical activity at the desired level of intensity) and central fatigue (subjective experience of fatigue) were prominent symptoms reported by PD participants (see Chaudhuri and Behan [37] for a review of central and peripheral fatigue). In PD, reports of fatigue were typically discussed in the context of medication, with participants reporting less energy on either side of their routine dosage. Peripheral fatigue described as muscle tiredness from physical exertion was reported (e.g., arms feeling tired), as was daytime sleepiness, which was attributed to sleep difficulties. Participants also made reference to central fatigue and how this often arose at the prospect of managing their symptoms, formulating plans, initiating activities or maintaining positivity. For example, when discussing the emotional experience of a single day, PD18 reflected: "you may be feeling some fatigue because of...the effort needed to build upon motivation". Several participants described feeling mentally fatigued in the morning, which dissipated once they had commenced their daily activities. The subjective experience of fatigue was also related to the perception of the activity, whereby enjoyable activities were associated with more self-reported "motivation" and subsequently less fatigue, for example, "It's funny, when I go to the gym I get bored very easily and I get tired much faster, when I go for dancing I can dance for three hours, and never get tired. It's amazing, depends on your attitude actually" (PD15).
Major theme two: strategies to engage physical activity
The use of prompts, routine & structure to promote activity
By far the most common strategy to overcome symptoms of apathy was the use of external prompts, adopted by both people with HD and PD. Prompts included reminders from family or friends, as well as electronic reminders set with smartphones: "I guess cues always help...so sometimes [it helps], if somebody else has prompted me" (PD18). Participants also engaged cues from the environment, such as committing to an activity at a certain time of the day or agreeing to do the activity as part of a social outing. Participants who were the most physically and socially engaged described having schedules in place that provided structure to their lives; this was especially prominent in PD participants. Schedules included weekly commitments to support groups, exercise classes, community groups and volunteer programs: "During the day normally I can go for classes, all sorts of classes, I've got my dancing classes, and my underwater exercise classes, yoga..." (PD15). These schedules appeared to provide the structure and routine needed to circumvent initiation deficits, whereby participants were not required to internally generate the activity. Importantly, the schedules of active participants tended to be long standing habits, adopted prior to the onset of their disease.
Creating a safe social environment facilitates exercise engagement, however resources for people with HD are lacking compared with PD
Participants with positive social supports tended to engage in more activity than those who were socially isolated or whose family members were also inactive, suggesting that community and environmental contexts may serve as a cue to overcome deficits in both initiation and goal generation. The significance of social supports was reiterated by participants who described the social setting of their activity as extremely important. For example, PD11 reflected: "there is a social aspect, which can't be ignored because it's also a part of it...I could spend part of my day with other people or I could spend the entire day sitting alone in the apartment", emphasizing community, inclusion and social acceptance as a vital component of initiating activity. This theme reflected that engagement in physical activity facilitated more than just physical health but also provided a sense of mental wellbeing: "I had to find a way to get myself motivated and Dance for PD came along and the people who I dance with are really such a remarkable group of people, community, that organisation, the dancers, they're like family there" (PD15). Whereas PD participants cited many community groups tailored to people with PD, only one HD participant discussed involvement in a HD support group. Most HD participants were not involved in community, support or social groups specific to people with HD.
The psychological response to a diagnosis affects engagement & psychoeducation after diagnosis may promote physical activity
This theme captured the variability in people's responses to the onset of symptoms and prognosis. Compared with the HD group, participants with PD often reflected that their diagnosis was a 'shock' and described a period of adjustment where they sought to regain normality: "I want to do it [exercise] because it makes me feel more like a human being, you feel more normal... in spite of my limitations... it makes me feel like a real person and that is something I lost considerably" (PD15). Despite certain symptoms threatening continued engagement in activity, for example "there are certain things that are very frustrating, straining, using arms, things that reinforce the fact that you've got a disability, and things which are actively difficult to do but that in the past have been easy to do" (PD15), most PD participants expressed the importance of regaining control through physical activity. For example, almost all PD participants indicated that their physical activity was partially motivated by the belief that exercise may slow the progression of their disease: "the thing that motivates me... is I don't want to be sicker and I don't know whether there is any hope of that not happening, but I'm going to do everything possible to make it harder" (PD16). In contrast, only four participants with HD cited the importance of physical activity in relation to their disease and few cited physical activity as disease modifying.
Summary of LARS data
Across both HD and PD participants, 13 people (65%) scored in the clinical range on the LARS. Of these 13, seven had PD (70% of the PD sample) and five had HD (50% of the HD sample). Individual scores are presented in Figure 2.
Discussion
In this study, we explored the experience of multidimensional apathy in people with HD and PD and how these experiences affected engagement in physical activity. We found that people described apathy in distinct ways and utilized several strategies to overcome apathy to engage in physical activity. Positively, the qualitative descriptions of apathy that we report also emerged in our quantitative LARS data, with more than half of the sample scoring in the clinical apathy range.
Our study is the first to qualitatively examine how people with HD and PD conceptualize and experience symptoms of apathy in the context of physical activity. By investigating how people experience and describe their own symptoms we provide a valuable contribution to a growing body of literature attempting to refine definitional frameworks of apathy [16,24]. Such frameworks are essential because understanding the lived experience of apathy and the context in which the syndrome occurs is needed for accurate measurement and early identification in clinical settings. Participants described examples of behavioral and cognitive apathy that are consistent with current definitional frameworks [16,17]. In both PD and HD, apathy occurred commonly as a difficulty in initiating activities. HD participants also demonstrated the difficulties in goal identification that occur in cognitive presentations of apathy [17,38]. Although emotional apathy is included in theoretical frameworks of apathy, this element did not emerge strongly from our data. The absence of emotional or social apathy however warrants further investigation to understand how these aspects of apathy affect people with HD and PD. Rather than reflecting an absence of these apathy symptoms, people with HD or PD may lack awareness of their own emotional or social blunting attributable to social cognition deficits [39][40][41][42]. Of course, this interpretation is speculative and definitive conclusions are beyond our scope.
In our study, we also observed significant overlap in the subjective experience of mental fatigue and apathy, whereby the mental energy required to commence an activity was described as inherently fatiguing. As a result, people with HD and PD tended to perceive fatigue and apathy as intertwined states, rather than distinct clinical symptoms or research constructs. This tendency has been reported elsewhere [19,43], but is not sufficiently considered in current apathy consensus criteria. There have been several attempts to characterize mental fatigue in PD [44], whereas little research has addressed how fatigue affects people with HD. A more nuanced understanding of how apathy and fatigue differentially relate to physical activity and how their current definitional constructs overlap and diverge will be essential for more refined diagnostic criteria, measurement tools and the subsequent monitoring of the two states in behavioral interventions.
As well as understanding the subjective experience of apathy, our study also highlighted several strategies that may promote engagement in physical activity. Participants who described being highly physically active tended to be those who had structured schedules that served as external prompts for activity. These prompts served to circumvent initiation deficits, as participants were not required to internally generate their activity. External prompts are likely to facilitate behavior in most people, independent of disease, however external prompts appear especially important for people with neurodegenerative diseases in which apathy is prominent [45,46]. Importantly, the schedules of active participants tended to be long standing habits, adopted prior to the onset of symptoms. These findings suggest that to reduce the impact of apathy, care providers may seek to engage participants early in the disease course, or in the case of HD, the pre-manifest period, before apathy becomes a prominent feature.
Implementing a structured routine early in the disease course may assist people in maintaining engagement later in the disease, when generating ideas or selecting goals becomes more difficult and the initiation of new behaviors becomes increasingly cognitively demanding [47][48][49]. Our study also suggested that educating people about the importance of physical activity is a critical first step of any community-based intervention to foster continued engagement and that this may be especially important for people with HD who, in our study, were not as engaged with services, or as health literate, as people with PD [12,29]. Although it was not the focus of this study, improved service provision for people affected by HD may be necessary to help this community of people engage socially and physically.
The role of executive dysfunction in the manifestation of cognitive apathy must be acknowledged as a limitation of this study, given the evidence that cognitive apathy and cognition share aspects of their neuropathology [5,17]. Although a comparison of apathy and cognition is beyond the scope of the present study, there is likely to be a complex interplay between the cognitive aspects of apathy and the progressive cognitive impairment characteristic of neurodegenerative diseases, particularly in HD [45][46][47][49]. Our study sought to focus on the lived experience of apathy and as a result we did not include caregivers. We nonetheless acknowledge that some participants may not
Figure 1 .
Figure 1. Visual schematic of the two major themes comprised of six sub-themes. Orange shaded elements (left) refer to major theme one and sub-themes within. Green shaded elements (right) refer to major theme two and sub-themes within. The bidirectional arrow between apathy and fatigue indicates the overlap between these two distinct psychological constructs. HD: Huntington's disease; PD: Parkinson's disease.
Figure 2 .
Figure 2. Total Lille Apathy Rating Scale scores across Huntington's and Parkinson's disease participants. Vertical dashed line indicates LARS clinical cutoff score. Scores less than -16 are indicative of symptoms meeting clinical threshold. HD: Huntington's disease; LARS: Lille Apathy Rating Scale; PD: Parkinson's disease.
Table 1 .
Summary of participant demographics and self-report measures (means [standard deviation]).
Table 2 .
Overview of Huntington's disease participants' disease and functional information.
HD02: 59-year-old male; 14 years of education; stage I. Single; living independently; unemployed
HD03: 46-year-old male; 16 years of education; stage I. Married; living at home with spouse; working full-time
HD04: 43-year-old female; 14 years of education; stage I. Married; living at home with spouse and young child; working full-time
HD05: 53-year-old male; 20 years of education; stage I. Single; living independently; working full-time
HD06: 66-year-old male; 15 years of education; stage II. Single; living independently; retired
HD07: 44-year-old female; 16 years of education; stage II. Married; living at home with spouse; retired
HD08: 73-year-old male; 21 years of education; stage I. Married; living at home with spouse; volunteering part-time
HD09: 49-year-old male; 12 years of education; stage I. Single; living independently; working full-time
HD10: 46-year-old female; 12 years of education; stage I. Married; living independently with spouse; working full-time
Stage: Total Functional Capacity score.
Table 3.
Overview of Parkinson's disease participants' disease and functional information.
Participant: age, gender and stage; descriptive information
PD11: 66-year-old male; 21 years of education; stage II. Married; living at home with spouse; working part-time
PD12: 77-year-old female; 12 years of education; stage II. Living with partner; retired
PD13: 73-year-old male; 21 years of education; stage I. Married; living at home with spouse; working part-time
PD14: 57-year-old female; 21 years of education; stage I. Married; living at home with spouse and adult child; working full-time
PD15: 59-year-old male; 18 years of education; stage II. Single; living independently; retired
PD16: 79-year-old female; 16 years of education; stage I. Married; living at home with spouse; retired
PD17: 55-year-old female; 18 years of education; stage I. Married; living at home with spouse; working part-time
PD18: 57-year-old male; 21 years of education; stage II. Married; living at home with spouse and adult child; working full-time
PD19: 63-year-old female; 21 years of education; stage I. Married; living at home with spouse; working full-time
PD20: 73-year-old male; 18 years of education; stage II. Living alone; retired
Stage: Hoehn & Yahr disease staging.
Power Losses Minimization for Optimal Operating Maps in Power-Split HEVs: A Case Study on the Chevrolet Volt
The power-split architecture is the most promising hybrid electric powertrain. However, a real advantage in energy saving while maintaining high performance can be achieved only by the implementation of a proper energy management strategy. This requires an optimized functional design beforehand and a comprehensive analysis of the powertrain losses afterwards, which can be rather challenging owing to the constructive complexity of the power-split transmission, especially for multi-mode architectures with multiple planetary gearing. This difficulty was overcome by a dimensionless model, already available in the literature, that enables the analysis of any power-split transmission, even in full electric operation. This paper relies on this approach to find the operating points of the internal combustion engine and both electric machines which minimize the total power losses. This optimization is carried out for given vehicle speed and demanded torque, by supposing different scenarios with respect to the battery's capability to provide or gather power. The efficiency of the thermal engine and the electric machines is considered, as well as the transmission mechanical power losses. The aim is to provide a global efficiency map that can be exploited to extract data for the implementation of the most suitable real-time control strategy. As a case study, the procedure is applied to the multi-mode power-split system of the Chevrolet Volt.
Introduction
In the last decades, stricter environmental policies undertaken against increasing global warming have encouraged the widespread uptake of hybrid electric vehicles. Besides the earliest and most popular series and parallel hybrid architectures, more and more automotive companies, first and foremost Toyota and General Motors, are developing the power-split hybrid electric powertrain [1][2][3][4][5].
The power-split layout combines the benefits of both series and parallel hybrids, resulting in a highly flexible system where the internal combustion engine (ICE) is kinematically decoupled from the wheels thanks to the operation of the electric unit. Two electric machines act as an active continuously variable unit (CVU), which can provide additional power for vehicle propulsion or gather the ICE power in surplus for battery recharging. In addition, regenerative braking and full electric vehicle (FEV) operation are achievable. The power flows within the powertrain are handled by a power-split unit (PSU) made up of planetary gear sets (PGs) and ordinary gear sets (OGs). A multi-PG PSU enables the minimization of the electric machines' power size by deploying a multi-mode power-split continuously variable transmission (PS-CVT), where clutch operations lead to several constructive arrangements to select under the desired driving conditions [6][7][8][9][10]. Nevertheless, the more complex the transmission constructive layout is, the trickier the identification of the occurring power flows becomes [11][12][13], as well as their management [14][15][16][17].
When implementing energy management strategies aimed at minimizing the powertrain power losses, the friction losses occurring in the transmission should also be considered. Nonetheless, because of the above-mentioned difficulties in multi-mode PS-CVTs
Dimensionless Parametric Approach for Voltec Analysis
The transmission system of the Chevrolet Volt, the so-called Voltec, is a PS-CVT with two PGs and three clutches which enable the multi-mode functioning. A third PG combined with a chain drive acts as a fixed-ratio OG in the final drive. Figure 1 shows the Voltec functional layout derived from [37] and previously described in [34,35]. The PGs and the final drive make up the PSU, which can be considered as a four-port device linked with the ICE (by the shaft in), the wheels (by the shaft out), and the electric motor-generator (MG) I (by the shaft i) and MG O (by the shaft o). The positive sign of the power flows is indicated by the arrows. The clutches C0, C1, and C2 are exploited to shift between different modes, as reported in Table 1. By engaging only the C2 clutch, the PG2 ring gear is braked to the frame, thus only PG1 acts as an epicyclic gear unit with non-proportional speeds of its branches. This realizes an input-split mode, mainly exploited for lower vehicle speeds. At higher speeds, a compound-split mode is achieved by engaging only the clutch C1, which connects the PG2 ring gear to the MG I and the PG1 sun gear. It should be noted that engaging C1 and C2 simultaneously realizes a fixed-ratio parallel mode, where only MG O can be active and the ICE speed is univocally coupled to the vehicle speed. Moreover, by additionally engaging the one-way clutch C0, which locks the ICE and the PG1 ring gear to the frame, two FEV modes can be performed. However, as shown in Table 1, General Motors considers only the FEV operation derived from the input-split arrangement.
The schematization of any PSU as a four-port device is always valid, regardless of the actual PSU constructive layout. This enables the exploitation of some general relationships between speeds, torques, and powers of the main PSU external ports which can be applied to any PS-CVT [33,35,36]. On the other hand, the PGs and OGs constructive parameters and their arrangement within the PSU lead to the definition of the basic functional parameters which rule the equations of the unified parametric model considered in this article. The constructive parameter here used for the definition of the planetary gear sets is the Willis' ratio Ψ, defined as the ratio between the rotational speed of the ring gear and the one of the sun gear while the carrier is still. The Willis' ratios of the two PGs are Ψ 1 = −0.535 and Ψ 2 = −0.481, while the fixed ratio of the final drive is k fd = 0.379. The functional parameters of the Voltec input-split and compound-split modes were identified in [34]. These are the mechanical points τ #i and τ #o and the corresponding speed ratios τ o#i and τ i#o , listed in Table 2. The former are defined as the overall speed ratio τ = ω out /ω in achieved when the i or o shaft is motionless, respectively. In general terms, the corresponding speed ratio τ j#k is the j-th speed ratio τ j = ω j /ω in achieved when the shaft k is motionless. The mechanical points often coincide with the overall speed ratio at which a mode shift occurs, since one of the two electric machines can be turned off. Therefore, a parallel hybrid functioning is achieved at the mechanical points. Once the parameters of Table 2 are known, the dimensionless approach addressed in [35,36] can be applied to the Voltec to analyze both the power-split and FEV operation in terms of speed, torque, and power ratios, by including the evaluation of the PSU mechanical power losses.
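For readers who want to experiment with the kinematics, the Python sketch below applies the standard Willis constraint implied by the definition above; the two Willis ratios come from the text, while the example shaft speeds are purely hypothetical.

```python
# Kinematics of a planetary gear set expressed through its Willis ratio.
# From the definition in the text (Psi = ring speed / sun speed with the
# carrier held still), the general constraint for any operating condition is
#   (w_ring - w_carrier) / (w_sun - w_carrier) = Psi,
# which rearranges to w_ring = Psi * w_sun + (1 - Psi) * w_carrier.

PSI_1 = -0.535  # Willis ratio of PG1 (value from the text)
PSI_2 = -0.481  # Willis ratio of PG2 (value from the text)


def ring_speed(psi: float, w_sun: float, w_carrier: float) -> float:
    """Ring-gear speed implied by the Willis constraint."""
    return psi * w_sun + (1.0 - psi) * w_carrier


# Hypothetical example: PG1 with the sun at 3000 rpm and the carrier at 1200 rpm.
w_ring_1 = ring_speed(PSI_1, w_sun=3000.0, w_carrier=1200.0)
print(f"PG1 ring speed: {w_ring_1:.1f} rpm")

# Sanity check of the fixed-carrier definition: with the carrier still,
# the ring/sun speed ratio must equal Psi itself.
assert abs(ring_speed(PSI_2, w_sun=1.0, w_carrier=0.0) - PSI_2) < 1e-12
```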
Dimensionless Speeds, Powers and Mechanical Losses in Voltec Power-Split Operation
The Voltec PS-CVT in power-split operation was previously analyzed in [34,35]. To avoid repetition, this section includes only the outcomes of these previous applications, not the procedure implemented to obtain them. The results are shown in Figure 2. Starting from the functional parameters of Table 2, the speed ratio τ i = ω i /ω in between MG I and the ICE was computed as a function of the overall speed ratio τ, as well as the speed ratio τ o = ω o /ω in between MG O and the ICE. These are shown in Figure 2a for both input- and compound-split modes. The shift from one mode to the other occurs at the mechanical point τ = τ #i = 0.247. For τ = τ * = 0.379 both electric machines rotate at the same speed, therefore both PGs work at their synchronous condition. At the PGs synchronism, the PSU mechanical power losses (Figure 2b) show a minimum, because the absence of relative motion between the PG branches eliminates the PG friction losses. The mechanical power losses of Figure 2b were calculated as a fraction of the input power as a function of the overall speed ratio τ and the opposite of the overall power ratio η = −P out /P in . Note that η is not a global efficiency, but a variable exploited to model the possibility of the battery to provide or absorb power. Therefore, it can also be far higher than one, if the demanded output power is mainly provided by the battery rather than the ICE. The PSU losses enabled the calculation of the real power that the electric machines should provide to or collect from the PSU as a fraction of the input power (Figure 2c,d).
Dimensionless Speeds, Powers and Mechanical Losses in Voltec Full-Electric Operation
The relationships exploited in [34,35] to analyze the power-split operation were rearranged in [36] to model also the FEV functioning mode. These are exploited in this section to apply the unified parametric model to the Voltec in FEV operation for the first time. Nonetheless, to find the best balance between brevity and self-consistency of the paper, this section presents only the implementation of formulas that were introduced and explained in the previous works [33][34][35][36], to which we refer the reader for more information. However, Appendix A outlines the major features of the model to facilitate the understanding of the content of this section.
As the engine is inactive in FEV driving (clutch C0 is engaged), speeds and powers can be more conveniently normalized to the output ones. Since the shaft in is motionless, the speed ratio between every electric machine and the shaft out is univocally defined by Equation (1). The functional parameters used in Equation (1) are those related to the input-split mode, since it is the only mode exploited by General Motors to perform the FEV operation. The dimensionless PSU power losses can be computed by the fast black-box method proposed in [35] adapted to the FEV analysis, as described in [36] and summarized in Appendix A. The considered efficiencies of the final drive (η fd = 0.953) and of the PGs in fixed-carrier functioning (η 0 = 0.96) are the same as those assumed in [35]. The total mechanical power losses are the sum of those occurring in the final drive, calculated as in Equation (2), and those occurring in the PGs, given by Equations (3) and (4). The parameters used in Equations (3) and (4) are indicated in Table 3, while in Equation (3) p out,1 is the portion of p out flowing in PG1, which can be computed as the difference between the power flowing into the final drive (p out /η fd ) and its portion flowing in PG2 (p out,2 , see Equation (A4) in Appendix A for its calculation), as in Equation (5).
Table 3. PGs reference notation, fixed-Z speed ratios, and fixed-Z efficiency [35,36].
Note that p′ out = p̄′ out = −P out /P out = −1 by definition. By considering p′ o = p̄′ o = −P o /P out as the independent variable, the total power losses p′ L can be swiftly computed by summing Equations (2)-(4). The dimensionless power flowing in the other electric machine can be calculated by the PSU real power balance of Equation (6). The results of the dimensionless analysis of the FEV operation are shown in Figure 3.
Identification of the Optimal Operating Maps
To carry out the analysis addressed in Section 2, it is sufficient to know only the constructive and functional layout of the PS-CVT. This allows the calculation of the PSU speed ratios as well as the dimensionless mechanical power that electric machines should provide or absorb, by considering the PSU friction losses. More specifically, in power-split operation, the dependent variables, which are normalized to ICE speed or power, can be assessed by freely assigning the speed ratio and the power ratio between the output and the input port (Figure 2). On the other hand, in FEV operation, the speed ratios are univocally defined, thus the dependent variables, which are normalized to the output power, can be determined by freely supposing the power ratio between one PSU port connected to an electric machine and the output port (Figure 3). In other words, for a given vehicle speed (directly related to ω out ) and for a given demanded torque, it is necessary to assume a specific functioning point of the ICE (in PS operation) or of one electric machine (in FEV operation) to univocally determine speeds and powers (and thus torques) on each PSU port.
Obviously, the output torque and the wheels' speed can be considered independently one from the other, but if the output torque is higher or lower than the driving resistance, the powertrain works in dynamic operation, and the inertia of the vehicle and the propulsors should be considered in the analysis. In this article, for computational simplicity, the steady-state operation is analyzed, whereby the output torque is a function of the vehicle speed (V veh ) and the power delivered by the powertrain is given by Equation (7), where m, f r , C d , A f are the Chevrolet Volt parameters reported in Table 4, α is the road slope in radians, g = 9.81 m/s² is the gravitational acceleration, and ρ a = 1.225 kg/m³ is the air density. It is worth noting that m is the sum of the unladen vehicle mass m 0 and the mass of passengers, fuel, or any other additional load. Therefore, m and α depend on the driving conditions. Nonetheless, it is sufficient to modify these values upstream of the procedure described in this section so as to obtain data referring to any steady-state driving condition. Once a vehicle speed is fixed, the functioning point of the ICE in power-split operation or of one electric machine in FEV operation has to be selected so as to determine τ = V veh /(R w ·ω in ) and η = −P out /P in , or p′ o = −P o /P out , respectively. To investigate all the viable powertrain functioning points for a given vehicle speed, the whole range of operation of the ICE or MG O has to be explored. Therefore, the efficiency maps of the propulsors are needed (Figures 4 and 5). These are necessary also for evaluating the propulsors' efficiency in each achievable working point.
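Equation (7) is not reproduced above; under the stated steady-state assumption it is presumably the usual road-load balance of rolling resistance, grade and aerodynamic drag. The Python sketch below (the authors' own implementation was in MATLAB) encodes that assumed form; the rolling-resistance coefficient, drag coefficient and frontal area used in the example call are placeholders, not the Table 4 values.

```python
import math

G = 9.81       # gravitational acceleration [m/s^2] (value from the text)
RHO_A = 1.225  # air density [kg/m^3] (value from the text)


def road_load_power(v_kmh: float, m: float, f_r: float, c_d: float, a_f: float,
                    alpha: float = 0.0) -> float:
    """Steady-state output power demand [W] at vehicle speed v_kmh [km/h].

    Assumed form of Equation (7): (rolling resistance + grade + aerodynamic drag)
    multiplied by the vehicle speed. Treat this as a plausible reconstruction,
    not the paper's exact expression.
    """
    v = v_kmh / 3.6  # convert km/h to m/s
    f_roll = m * G * f_r * math.cos(alpha)
    f_grade = m * G * math.sin(alpha)
    f_aero = 0.5 * RHO_A * c_d * a_f * v ** 2
    return (f_roll + f_grade + f_aero) * v


# Illustrative call: m = 1750 kg on level ground, as in the Results section;
# f_r, c_d and a_f below are hypothetical placeholders.
p_out = road_load_power(100.0, m=1750.0, f_r=0.009, c_d=0.28, a_f=2.2, alpha=0.0)
print(f"Demanded power at 100 km/h: {p_out / 1000:.1f} kW")
```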
Calculation of Optimal Operating Maps in Power-Split Operations
For a given vehicle speed V veh and subsequent output power P out , the powertrain operation can be analyzed by considering each pair of ICE speed and torque (ω in , T in ) ranging between their minimum and maximum values within the ICE operation range of Figure 4. This leads to the calculation of an overall speed ratio matrix, having the same dimensions as the ICE efficiency map, where each element is τ = V veh /(R w ·ω in ). A corresponding overall power ratio matrix containing η = −P out /(ω in ·T in ) can be obtained, too. Hence, these matrices can be used to interpolate the dimensionless results of Figure 2, in order to identify the speed and power ratios of the electric machines for each combination of τ and η. Then, these ratios can be multiplied by the corresponding ω in and P in to assess the dimensional rotational speed of the electric machines (ω i , ω o ) and their mechanical power (P i , P o ). In this way, the operating point of both electric machines is univocally determined, as is their efficiency, which enables the calculation of the net electric power flowing to or from the battery, as in Equation (8). The positive sign of the power flows is shown in Figure 1. Hence, by the simple procedure herein addressed, it is possible to obtain a set of matrices containing all the data describing the powertrain steady-state operation, which can be exploited as the basis of the desired energy management strategy. As an example, in this paper, a procedure for pursuing maximum global efficiency is proposed. According to the direction of the battery power, P batt can be an output or input power in the powertrain. Therefore, if P batt > 0 the battery supports the ICE in the vehicle propulsion, and the global efficiency is given by Equation (9), where P fuel = P in /η ICE is the power provided by fuel combustion. η ICE can be derived for each combination of ω in and T in from Figure 4. If P batt < 0 the ICE delivers power in surplus which can be used to recharge the battery, and the global efficiency is given by Equation (10). Finally, it is possible to extract the maximum value from the matrix η gl for a given vehicle speed and find the related ICE and electric machine operation resulting in the most efficient driving.
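Since Equations (8)-(10) are not reproduced above, the following Python sketch should be read only as an outline of the grid search this paragraph describes. The map and interpolation interfaces, the battery power balance and both global-efficiency expressions are assumptions, not the paper's exact definitions, and the authors' own implementation was in MATLAB.

```python
def optimal_ps_operating_point(v_veh_kmh, p_out, ice_map, mg_i_eff, mg_o_eff,
                               split_model, r_w, p_batt_min, p_batt_max):
    """Grid search over ICE operating points for one vehicle speed in power-split mode.

    Hypothetical interfaces standing in for the data the text says is needed:
      ice_map     -- dict with lists 'w' [rad/s], 'T' [Nm] and a nested list 'eta'
                     of ICE efficiencies (the map of Figure 4)
      mg_i_eff,
      mg_o_eff    -- callables (speed, mech_power) -> efficiency (Figure 5)
      split_model -- callable (tau, eta) -> (tau_i, tau_o, p_i_ratio, p_o_ratio)
                     interpolating the dimensionless results of Figure 2
    """
    v = v_veh_kmh / 3.6
    best = None
    for iw, w_in in enumerate(ice_map["w"]):
        for it, t_in in enumerate(ice_map["T"]):
            p_in = w_in * t_in
            tau = v / (r_w * w_in)            # overall speed ratio
            eta = p_out / p_in                # opposite of the overall power ratio
            tau_i, tau_o, p_i_ratio, p_o_ratio = split_model(tau, eta)
            w_i, w_o = tau_i * w_in, tau_o * w_in
            p_i, p_o = p_i_ratio * p_in, p_o_ratio * p_in
            # Assumed form of Eq. (8): net battery power is the sum of the electric
            # powers of the two machines, each corrected by its efficiency according
            # to whether it works as a motor (mechanical power > 0) or generator.
            p_batt = 0.0
            for w_m, p_m, eff in ((w_i, p_i, mg_i_eff), (w_o, p_o, mg_o_eff)):
                e = eff(w_m, p_m)
                p_batt += p_m / e if p_m > 0.0 else p_m * e
            if not p_batt_min <= p_batt <= p_batt_max:
                continue                      # battery/converter limit violated
            p_fuel = p_in / ice_map["eta"][iw][it]
            # Assumed forms of Eqs. (9) and (10).
            if p_batt > 0.0:                  # battery assists the ICE
                eta_gl = p_out / (p_fuel + p_batt)
            else:                             # surplus ICE power recharges the battery
                eta_gl = (p_out - p_batt) / p_fuel
            if best is None or eta_gl > best[0]:
                best = (eta_gl, w_in, t_in, w_i, w_o, p_batt)
    return best
```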
Nevertheless, the working points which violate a constructive constraint of propulsors, power converters, or batteries should not be included among the potential optimal ones. Therefore, the final operating maps do not include the functioning points whereby the ICE or electric machine operation is not included within the maps of Figures 4 and 5, or the electric powers exceed their respective maximum limits of power converters or batteries (Table 5). In fact, the real constraint on P batt depends on the battery state of charge (SOC): P max_charge and P max_discharge should be evaluated instantaneously by a deeper dynamic analysis. However, a simplified approach is exploited in the following to swiftly analyze four different scenarios, summarized in Table 6:
• SOC comprised between its lower and higher thresholds (SOC = FREE);
• Battery completely discharged (SOC = 0);
• Battery completely charged (SOC = 1);
• Maintaining-charge driving (SOC = CONSTANT).
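Because Table 6 is not reproduced here, the mapping below is only a plausible reading of the four scenario labels: each one is translated into a pair of bounds on the battery power used to filter the candidate operating points. The scenario keys and the numeric limits in the example call are hypothetical.

```python
def battery_power_bounds(scenario: str, p_max_charge: float, p_max_discharge: float):
    """Return (lower, upper) bounds on P_batt [W] for a given SOC scenario.

    p_max_charge and p_max_discharge stand in for the converter/battery limits of
    Table 5 (values not reproduced here). Charging power is counted as negative
    P_batt, discharging as positive, consistent with the sign used in the text.
    """
    bounds = {
        "FREE":     (-p_max_charge, p_max_discharge),  # battery may charge or discharge
        "EMPTY":    (-p_max_charge, 0.0),              # SOC = 0: recharging only
        "FULL":     (0.0, p_max_discharge),            # SOC = 1: discharging only
        "CONSTANT": (0.0, 0.0),                        # maintaining-charge driving,
                                                       # here taken as zero net power
    }
    return bounds[scenario]


# Illustrative call with placeholder limits (not the Table 5 values).
print(battery_power_bounds("FREE", p_max_charge=60e3, p_max_discharge=110e3))
```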
Calculation of Optimal Operating Maps in FEV Operations
For a given vehicle speed, in FEV driving the rotational speeds of the electric machines are univocally determined by Equation (1), therefore the only degree of freedom is the torque of one electric machine. As stated in Section 2, by choosing as the independent variable the torque of MG O, which can range from its minimum to its maximum value corresponding to each rotational speed (Figure 5), it is possible to consider an array with different values of p′ o = −(ω o ·T o )/P out . This can be used to interpolate the dimensionless results of Figure 3b and find the potential operating points of MG I for given P o and P out . Then, similarly to Section 3.1, P batt can be calculated by Equation (8) and the global efficiency in FEV operation is simply given by Equation (11). In FEV operation, the battery SOC is supposed to be always sufficient to provide the demanded power.
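Equation (11) is not reproduced above; since the battery is the only power source in FEV driving, a plausible form, shown here as an assumption rather than the paper's expression, is the ratio of the demanded output power to the power drawn from the battery.

```python
def fev_global_efficiency(p_out: float, p_batt: float) -> float:
    """Assumed form of Equation (11): output power over battery power in FEV mode."""
    return p_out / p_batt


# Hypothetical example: 20 kW demanded at the wheels, 23 kW drawn from the battery.
print(f"FEV global efficiency: {fev_global_efficiency(20e3, 23e3):.3f}")
```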
Results and Discussion
The procedure described in Section 3 for calculating the optimal operating maps was implemented in MATLAB, after having carried out the dimensionless approach reported in Section 2. The following results were computed by considering a total vehicle mass equal to m = 1750 kg on level ground (α = 0 rad). The analyzed vehicle speed ranges from 5 to 200 km/h.
Results in Power-Split Operations
The mesh grid used in input to the script was obtained by the arrays ω in = 1000 : 10 : 6000 rpm and T in = 10 : 1 : 140 Nm. The procedure described in Section 3.1 was carried out for each vehicle speed, whereby the respective optimal operating point was selected after excluding those outside the boundaries of Figures 4 and 5, Table 5, Equation (11), and Table 6. The results of Figures 6-11 show the optimal operating points resulting from the optimization procedure aimed at minimizing the powertrain power losses. Figure 6a shows that the best results are achieved for SOC = FREE since the availability of the battery both as a power source and power storage would enable the achievement of the most efficient power flows (Figure 10). Nevertheless, the maximum global efficiency is lower than 0.33 up to 50 km/h, therefore it would be more advisable to turn off the engine and drive in FEV operation (see Section 4.2). The only reason to let the ICE work at lower speeds is to recharge the battery if possible (SOC = FREE or SOC = 0). In this case, the ICE should operate in the maximum efficiency region, otherwise, it should be turned off also for higher speeds if the battery can supply power for propulsion (SOC = FREE or SOC = 1) (Figure 7). In this way, the global efficiency would be significantly enhanced (Figure 6a), since the demanded output power would be provided by the electric unit (Figure 6b), which is more efficient than the ICE.
Nevertheless, a more robust control strategy should regulate the battery power according to the instantaneous SOC, so as to ensure sufficient range. Indeed, for speeds higher than 100 km/h, the optimal powertrain operation would be achieved at the expense of the driving range. Therefore, over 100 km/h it would be even more advisable to limit the power supplied by the battery and increase that provided by the ICE, even though this would reduce the global efficiency.
On the other hand, if the battery is completely discharged (SOC = 0) or a maintaining-charge driving is desired (SOC = CONSTANT), the demanded output power should be provided mainly by the engine (Figure 7b). In this case, the battery charging would be recommended between 50 and 145 km/h, while over 145 km/h keeping the SOC constant would result in greater efficiency. Nonetheless, since the ICE maximum power is 75 kW [37], the vehicle speed cannot exceed 190 km/h with SOC = 0 or SOC = CONSTANT. Figures 8-10 show the optimal exploitation of the electric machines in power-split operation. They suggest using both MG I and MG O as generators for battery recharging up to 50 km/h. It should be the same from 50 to 130 km/h if SOC = 0, while MG I should be used as a motor and MG O as a generator if the battery can provide power (SOC = FREE or SOC = 1). In the latter case, the mechanical energy converted to electric energy by MG O is reconverted to the mechanical form by MG I. Over 155 km/h, both MG I and MG O should be used as motors for SOC = FREE or SOC = 1.
To assess the optimal mode selection, it is sufficient to analyze the optimal overall transmission ratio of Figure 11, which shows that for lower speeds the input-split mode should be preferred, while the compound-split mode is advisable at higher speeds. Moreover, it is worth noting that for SOC = 0 and SOC = CONSTANT (i.e., if the battery cannot provide power for propulsion) the optimal overall speed ratio at medium-high speed is the one that realizes the PGs synchronism, where the mechanical power losses are minimized (Figure 2).
Results in FEV Operations
The array used in input to the script was T o = −280 : 1 : 280 Nm. The procedure described in Section 3.1 was carried out for each vehicle speed, whereby the respective optimal operating point was selected after excluding those outside the boundaries of Figure 5 and Table 5. The results of Figures 12 and 13 are related to the best powertrain operating points resulting in the highest global efficiency for each vehicle speed. Figure 12a shows that the global efficiency in FEV operation is on average higher than the one achievable in power-split operation, thus FEV driving is specifically suggested for low-medium speeds. As stated in Section 4.1, exploiting only the battery power for propelling the vehicle in steady-state driving at higher speeds would imply a reduced range, thus it should be avoided. Over 145 km/h the FEV driving cannot be realized because the required rotational speed of MG O would exceed its maximum value. Figure 13 shows that the optimal FEV operation in steady-state driving involves the exploitation of MG I except for two limited speed ranges from 5 to 15 km/h and from 50 to 60 km/h. From 135 to 145 km/h it is advisable to operate with both electric machines acting as motors.
Conclusions
This article is an advancement of the previous contributions [33][34][35][36] addressing a unified parametric model for the analysis of PS-CVTs. Herein it was applied to the Chevrolet Volt by introducing the propulsors' efficiency maps to move from a dimensionless approach to a dimensional one. The final aim was to provide a complete and general tool to comprehensively analyze any power-split transmission, even multi-PG and multi-mode as the Voltec is, by also assessing the mechanical power losses, which are often neglected owing to the difficulty of their calculation. The FEV operation can also be swiftly analyzed.
As described in Sections 3 and 4, this method enables the investigation of the powertrain response for given vehicle speed and demanded torque. All the viable operating points are available as a set of operating maps containing the propulsors functioning points, their efficiency, the mechanical power losses in the transmission, the battery power, and the global efficiency. These data can be exploited for the development of the desired energy management strategy.
As an example, far from presenting it as an exhaustive control strategy, this paper proposes a procedure for selecting the optimal operating points which maximize the powertrain global efficiency in steady-state power-split and FEV driving. The results showed the importance of properly handling the battery SOC to ensure the possibility of exploiting it as an energy exchanger, as well as the benefit of PGs synchronism in reducing the friction losses. Nonetheless, MG O appears to be under-exploited in steady-state driving (Figure 9). This can be due to the fact that the surplus power of MG O could be used to provide an acceleration boost, thus it might be exploited more in dynamic operation, which was not analyzed here.
Therefore, future research could be aimed at extending the model in this very direction. Indeed, its basic relationships between PSU speed, torque, and power ratios proposed in [33,35,36] would still be valid in dynamic operation, being based on the principle of power conservation. However, the correspondence between the torques developed at the PSU main ports and those provided by the propulsors would no longer hold because of inertial effects caused by acceleration or deceleration of the vehicle and of the propulsors themselves. The extension of the model to dynamic conditions would enable a more in-depth simulation of the powertrain operation, one that could instantaneously account for the actual battery SOC or the power converters' efficiency. The results of such simulations could then feed the robust energy management strategies available in the literature to optimize the desired objective function, which may even be the minimization of the mechanical power losses.
Conflicts of Interest:
The authors declare no conflict of interest.
Nomenclature
k_x: fixed-gear ratio on the x-th branch
P_j or P̄_j: ideal or real power in the j-th branch
p_j or p̄_j: dimensionless ideal or real power in the j-th branch as a fraction of the input power
p'_j or p̄'_j: dimensionless ideal or real power in the j-th branch as a fraction of the output power
P_Loss: mechanical power losses
p_L: dimensionless mechanical power losses as a fraction of the input power
p'_L: dimensionless mechanical power losses as a fraction of the output power
T_j: torque applied to the j-th shaft
η: opposite of the overall power ratio (η = −P_out/P_in)
η_0: fixed-carrier efficiency of a PG
Appendix A
The basic theory of the unified parametric model addressed in this paper considers the PSU as made up of one or more three-port mechanisms (TPMs), each consisting of one active PG and up to three OGs (Figure A1). X, Y, Z indicate a general way to refer to the PG branches (ring gear, sun gear, and carrier); x, y, z indicate a general way to refer to the PSU main ports (in, out, i, o).
Figure A1. Scheme of a three-port mechanism with one PG (illustrated by a rounded-corner square) and three OGs (rhombi) realizing the k_j fixed ratios. Red arrows indicate the positive direction of power flows.
The nomenclature adopted in this model distinguishes between torques and powers assessed by considering the PSU as ideal or real. In the latter case, the torque and power symbols are overlined. Nonetheless, this distinction affects only the dependent variables, which are supposed to counterbalance the PSU mechanical power losses. Moreover, capital letters indicate dimensional power flows, while the dimensionless ones are indicated by lowercase letters without or with an apex, depending on whether they are normalized to the input power (P_in) or to the opposite of the output power (−P_out), respectively. The former normalization is exploited to address power-split operation, while the latter is used in FEV analysis.
The total mechanical power losses (p'_L in FEV operation) are the sum of the losses occurring in the OGs and PGs, which are given by Equations (A1) and (A2). In Equation (A1), η_{X/x} is the OG efficiency, while in Equations (A1) and (A2), p'_x = −P_x/P_out is the normalized power ratio of the x-th shaft. In Equation (A2), η^Z is the efficiency of the PG evaluated when its branch Z is still, while ψ^Z_{X/Y} is the speed ratio between branches X and Y when Z is still. Therefore, both η^Z and ψ^Z_{X/Y} are constant parameters depending on the PG Willis ratio Ψ and its basic fixed-carrier efficiency η_0 [36]. φ^z_{x/y} is the characteristic function, a crucial tool for both analysis and design purposes [33,36], which is a function of τ ruled by the nodal ratios. It is worth noting that in FEV operation τ → ∞; this can be modeled in numeric software by considering a value of τ high enough (e.g., 10^5). Moreover, the characteristic functions rule the ideal power ratio between two TPM shafts.
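The remark on approximating τ → ∞ with a large finite value can be checked numerically. The rational form below is only a placeholder for a characteristic function (its coefficients would be fixed by the nodal ratios of the specific PSU), so the numbers are illustrative rather than taken from the Voltec model.

```python
def phi_placeholder(tau, a=1.8, b=-0.4, c=1.0, d=2.5):
    """Placeholder characteristic function: a rational function of tau whose
    coefficients stand in for those fixed by the nodal ratios of a real PSU."""
    return (a * tau + b) / (c * tau + d)

limit = 1.8 / 1.0  # value as tau -> infinity, i.e., the FEV operation limit
for tau in (1e2, 1e3, 1e5):
    value = phi_placeholder(tau)
    print(f"tau = {tau:>8.0f}  phi = {value:.6f}  |phi - limit| = {abs(value - limit):.2e}")
# At tau = 1e5 the placeholder already differs from its limit by ~5e-5, which is
# why a value around 10^5 is a reasonable numerical stand-in for tau -> infinity.
```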
Critical edge behavior in the perturbed Laguerre ensemble and the Painleve V transcendent
In this paper, we consider the perturbed Laguerre unitary ensemble described by the weight function $$w(x,t)=(x+t)^{\lambda}x^{\alpha}e^{-x}$$ with $ x\geq 0,\ t>0,\ \alpha>0,\ \alpha+\lambda+1>0.$ The Deift-Zhou nonlinear steepest descent approach is used to analyze the limit of the eigenvalue correlation kernel. It was found that under the double scaling $s=4nt,$ $n\to \infty,$ $t\to 0 $ such that $s$ is positive and finite, at the hard edge, the limiting kernel can be described by the $\varphi$-function related to a third-order nonlinear differential equation, which is equivalent to a particular Painlev\'e V (abbreviated as P$_{\rm V}$) transcendent via a simple transformation. Moreover, this P$_{\rm V}$ transcendent is equivalent to a general Painlev\'e P$_{\rm III}$ transcendent. For large $s,$ the P$_{\rm V}$ kernel reduces to the Bessel kernel $\mathbf{J}_{\alpha+\lambda}.$ For small $s,$ the P$_{\rm V}$ kernel reduces to another Bessel kernel $\mathbf{J}_\alpha.$ At the soft edge, the limiting kernel is the Airy kernel, as for the classical Laguerre weight.
Introduction and statement of results
In random matrix theory, the unitary random matrix ensemble on the space of n × n positive definite Hermitian matrices is described by a probability measure built from a weight function w(x) on L ⊂ R, with a normalization constant Z_n ensuring that the total mass equals one.
The correlation kernel, an important object investigated in random matrix theory [17,21,27,36], takes the form
$$K_n(x, y) = (w(x))^{\frac{1}{2}} (w(y))^{\frac{1}{2}}\, \gamma_{n-1}^{2}\, \frac{\pi_n(x)\pi_{n-1}(y) - \pi_{n-1}(x)\pi_n(y)}{x - y}, \qquad (1.1)$$
where π_n(x) is the monic orthogonal polynomial associated with the weight w(x) on L and γ_n is the leading coefficient of the corresponding orthonormal polynomial.
It is interesting to study the local eigenvalue behavior by characterizing the kernel in the large n limit under a suitable scaling. For instance, in the case of the Laguerre Unitary Ensemble (LUE), see [6,27,37], the limiting density of eigenvalues at a fixed x is known as a type of Marčenko-Pastur law [35]:
$$\mu(x) = \lim_{n\to\infty} 4K_n(4nx, 4nx) = \frac{2}{\pi}\sqrt{\frac{1-x}{x}}, \qquad 0 < x < 1. \qquad (1.3)$$
In previous works [26,43,44], at the hard edge of the eigenvalue density the limiting kernel is known as the Bessel kernel, and at the soft edge of the eigenvalue density the limiting kernel is known as the Airy kernel
$$A(x, y) := \frac{\mathrm{Ai}(x)\mathrm{Ai}'(y) - \mathrm{Ai}'(x)\mathrm{Ai}(y)}{x - y}.$$
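For reference, the hard-edge Bessel kernel mentioned above is usually written in the following standard form; this expression is quoted from the general random matrix literature rather than reproduced from the present paper.

```latex
\[
  \mathbb{J}_\alpha(x,y)
  = \frac{J_\alpha(\sqrt{x})\,\sqrt{y}\,J_\alpha'(\sqrt{y})
        - \sqrt{x}\,J_\alpha'(\sqrt{x})\,J_\alpha(\sqrt{y})}{2(x-y)},
  \qquad x, y > 0,
\]
% J_alpha is the Bessel function of the first kind of order alpha.
```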
Since the Deift-Zhou nonlinear steepest descent approach was introduced into random matrix theory, several new properties have been identified. For example, the limiting kernel in the bulk of the spectrum is usually described by the sine kernel, a phenomenon well known as universality (see [17,18,20,32,45]). Using this powerful method, it has been found that some limiting kernels are related to Painlevé equations. In a particular double scaling scheme, the limiting kernel involves the Painlevé I equation in [14].
An appropriate double scaling limit of the kernel relates to the Painlevé II equation, see [4,12,13]. A limiting kernel related to the Painlevé II equation and a 4 × 4 RH problem is discussed in the Hermitian two-matrix model, see [15]. An α-generalized Airy kernel is expressed by a solution of a Painlevé XXXIV equation in [29]. A Painlevé III equation arises in a double scaling limit of the kernel, and this P_III kernel reduces to the Bessel kernel and the Airy kernel under certain conditions [46]. The universality at the hard edge is studied in [32,34]. Under a certain double scaling scheme, the limiting kernel can be expressed in terms of a solution of the Painlevé V equation, and this P_V kernel degenerates to Bessel kernels under different conditions, see [47]. A limiting kernel in terms of hypergeometric functions is investigated in [16]. A type of general Bessel kernel is derived from a scaling limit of the kernel at the hard edge, and its explicit integral formula is given in [33].
In this paper, we consider the weight
$$w(x, t) = (x + t)^{\lambda} x^{\alpha} e^{-x}, \qquad x \ge 0,\ t > 0,\ \alpha > 0,\ \alpha + \lambda + 1 > 0. \qquad (1.4)$$
When t = 0, or λ = 0, the weight degenerates to the classical Laguerre weight. A more general weight of this type arises from Multiple-Input-Multiple-Output (MIMO) wireless communication systems. It is interesting that the free parameter λ "generates" the Shannon capacity, and the moment generating function (MGF) can be expressed by a ratio of Hankel determinants [9] for finite n, where the Hankel determinant involves a particular Painlevé V equation. The double scaling limit of the Hankel determinant is described by another particular P_V equation which is equivalent to a particular P_III, see [10]. For more information related to wireless communication, we refer to [2,7] and references therein. Moreover, a special case of (1.4), w(x, t) = (x + t)^a x^2 e^{−x}, x > 0, a > −1, appeared in the study of the smallest eigenvalue at the hard edge of the Laguerre unitary ensemble, see [25].
The weight (1.4) can be seen as $x^{\lambda+\alpha} e^{\frac{\lambda t}{x} - x}$. Up to an essential singular point, this is the singularly perturbed Laguerre weight considered in [8], where a connection with P_III was obtained for finite n. Heuristically, this may be seen as follows (note s = 4nt):
$$(x + t)^{\lambda} x^{\alpha} e^{-x} = x^{\lambda+\alpha} \Big(1 + \frac{t}{x}\Big)^{\lambda} e^{-x} \approx x^{\lambda+\alpha} e^{\frac{\lambda t}{x} - x}, \qquad n \to \infty.$$
By the Riemann-Hilbert approach, the double scaling limit of the kernel associated with this singular weight appears as a P_III kernel, see [46]; the physical background is provided in [38], and a related study for a type of singularly perturbed Gaussian (Hermite) weight can be found in [5].
It is interesting to investigate the double scaling limit of the kernel associated with the weight (1.4) on (0, ∞). In this paper, we adopt the Deift-Zhou nonlinear steepest descent method to investigate the limiting behavior of the kernel.
As preparation, we recall a Lax pair given by Kapaev and Hubert [31]; note that this Lax pair and its Painlevé V equation are not contained in [24]. With the variables λ and x in [31] renamed ξ and s, respectively, it will help us derive the P_V transcendent directly in our situation.
Proposition 1 (Kapaev and Hubert [31]). The Lax pair for Ψ is given by (1.5) and (1.6), where a, b, c, p, q, and r depend on s, and ρ, µ and υ are constants. The compatibility condition for the system (1.5) and (1.6) is equivalent to a system of equations in which the unknown function a(s) satisfies a differential equation and the function y(s) satisfies the special P_V(2µ², −2υ², 2ρ, 0) equation (1.7). Moreover, the above P_V equation is equivalent to the general P_III equation [28].
In our case, the Lax pair (1.8) and (1.9) below holds for Φ(ξ, s), and an auxiliary function r(s) is a solution of a third-order nonlinear differential equation associated with a particular Painlevé V transcendent.
Remark 1. It is easy to check that the Lax pair (1.5), (1.6) for Ψ matches the Lax pair (1.8), (1.9) for Φ under a suitable linear transformation. From this correspondence, we can identify µ² = α²/4, υ² = λ²/4, and ρ = 1/4, and in particular the unknown function a(s) = 1/4 + r(s)/2 − q′(s). Then all other quantities in the Lax pair can be expressed in terms of the auxiliary function r(s) and its derivatives.
Although the P_V transcendent in (1.7) is equivalent to the general P_III transcendent [28], there is no algebraic gauge transformation from the Lax pair for the P_V equation to the Lax pair for the P_III equation, which has only two irregular singular points, at zero and infinity.

Proposition 3. If r(s) satisfies the Lax pair (1.8) and (1.9) for Φ(ξ, s), then it also satisfies a third-order nonlinear differential equation (1.15), where α > 0, α + λ + 1 > 0, and r(s) satisfies the boundary conditions (1.16); if α + λ > 0 and s → 0, then (1.17) holds.

Proof. With the aid of equations (1.12) and (1.14), which derive from the compatibility condition of the Lax pair (1.8) and (1.9) for Φ(ξ, s), one takes the derivative of (1.14), combines it with (1.12), and obtains the third-order nonlinear differential equation, where c_1 is an integration constant. From the boundary condition (4.114), it follows that c_1 = 0 in this situation. By the equivalence of q(s) in (1.14) and (1.18), one finds that r(s) also satisfies another second-order nonlinear differential equation (1.19), which can also be rewritten as (1.21). Then the third-order equation (1.15) is the sum of −8r′(s) times equation (1.19) and 2r′(s) + 1 times equation (1.21).
Proposition 4.
Let y(s) be given by (1.22); then y(s) satisfies the P_V equation (1.23), where α > 0, α + λ + 1 > 0, and y(s) satisfies the boundary conditions (1.24).

Proof. Inserting (1.22) into the third-order nonlinear differential equation (1.15), one finds the P_V transcendent in (1.23). A combination of (1.22) and (1.16) yields the initial data (1.24) for y(s). The above P_V equation is also derived in the analysis of the single-user MIMO system [10], but not by the Riemann-Hilbert approach.
Main results
We denote the kernel (1.1) associated with the weight (1.4) as K_n(x, y; t) in the rest of this paper. By two scaling steps, K_n(x, y; t) is scaled as 4nK_n(4nx, 4ny; t), and the "coordinates" x and y are re-scaled as x = u/(16n²), y = v/(16n²). After that, the limiting kernel is described by the ϕ-functions which involve the above P_V equation (1.23); it is also known as the P_V kernel.
The P V kernel (1.25) degenerates to the Bessel kernel J α as s → ∞.
For the limiting behavior of the kernel at the soft edge: first, K_n(x, y; t) is scaled as 4nK_n(4nx, 4ny; t); second, x and y are scaled as x = 1 + (2n)^{−2/3}u, y = 1 + (2n)^{−2/3}v. After that, the limiting kernel reads as the following Airy kernel.
The remainder of this paper is organized as follows. In Section 2, we propose a model Riemann-Hilbert problem associated with Φ(ξ, s), derive its Lax pair, and prove its solvability via a vanishing lemma. Throughout this paper we use the Pauli matrices σ_1, σ_3 and the two auxiliary matrices σ_−, σ_+:
$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad \sigma_+ = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad \sigma_- = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.$$
A model RH problem and its solvability
We state in the following a model RH problem for Φ(ξ, s), which will be of use in the steepest descent analysis.
Both rational matrix functions Φ_s Φ^{−1} and Φ_ξ Φ^{−1} are analytic in the ξ-plane, with two possible simple poles at 0 and s, since all the jumps in (2.31) of the RH problem for Φ(ξ, s) are constant matrices.
The rational function Φ_ξ Φ^{−1} has a removable singularity at ξ = ∞ and two possible simple poles at ξ = 0 and ξ = s, respectively, by the asymptotic behaviors (2.32), (2.33) and (2.34). The asymptotic behavior of the LHS of (2.36) at ξ = ∞ derives from (2.32), while the RHS of (2.36) is expanded at ξ = ∞; comparing the constant terms and the coefficients of ξ^{−1} on both sides, one finds a first set of relations. A calculation of the LHS of (2.36) with (2.33), comparing the coefficients of ξ^{−1} on both sides as ξ → 0, gives a second set; similarly, a calculation of the LHS of (2.36) with (2.34), comparing the coefficients of (ξ − s)^{−1} on both sides as ξ → s, gives a third. Moreover, the rational function Φ_s Φ^{−1} has a removable singularity at ξ = ∞ and a simple pole at ξ = s; the corresponding relations follow from the asymptotic behaviors (2.32) and (2.34). Similarly, substituting (2.34) into the rational function Φ_ξ Φ^{−1} and comparing the coefficient of (ξ − s)^{−1} on both sides of (2.40) as ξ → s, one finds further relations. The mixed derivatives Φ_{ξs} = Φ_{sξ} lead to the compatibility condition (2.42). After some calculation and simplification, the compatibility condition (2.42) leads, together with the determinant relation in (2.38) and det(B_2) = −λ²/4 (see (2.41)), to expressions in terms of the auxiliary functions q(s), r(s), t(s) and their derivatives; after some further calculations one arrives at (2.47), from which q′(s), t′(s) and q(s) are found in terms of r(s) and its derivatives, as made explicit in (1.12), (1.13) and (1.14).
Solvability of the Riemann-Hilbert problem for Φ(ξ, s)
In the general case, the solvability of a RH problem derives from the triviality of its homogeneous RH problem, namely the vanishing lemma. With the aid of the Cauchy operator, a RH problem turns into a Fredholm singular equation; for examples, see [17,20,23,24,29].
The vanishing lemma shows that the null space is trivial, which, together with the Fredholm alternative theorem, implies that the index of the Fredholm singular equation is zero and the RH problem is solvable. A summary is contained in [24] and references therein; for some examples and details, see also [11,30,47]. Here, a vanishing lemma is proved for the homogeneous RH problem for Φ(ξ, s).
Then Φ_1(ξ, s) fulfills the corresponding RH conditions. In order to move the oscillatory terms from the diagonal to the off-diagonal, another transformation is applied, leading to (2.55) and (2.56). (c) The behavior of Φ_2(ξ, s) at infinity is the same as (2.53).
Deift-Zhou steepest descent analysis
With the well-known relation presented in [22], the orthogonal polynomials with respect to the weight (1.4) can be characterized by a RH problem for Y. We adopt the powerful Deift-Zhou steepest descent analysis (or Riemann-Hilbert method) of [19], see also [4,18,20], to analyze the RH problem for Y. Following the standard process, we obtain a series of invertible transformations Y → T → S → R such that, at the end, the matrix function R is close to the identity matrix. After that, tracing back the inverse transformations, the kernel (1.1) associated with the weight (1.4) can be represented by the uniform asymptotics of the orthogonal polynomials in the complex plane for large n.
Riemann-Hilbert problem for Y
The orthogonal polynomials with respect to the weight function w(x, t) in (1.4) are described by the following 2 × 2 matrix-valued function Y(z).
(b) Y(z) satisfies a jump condition on (0, +∞) in which w(x, t) is given by (1.4).
(c) The asymptotic behavior of Y(z) at infinity and (d) the asymptotic behavior of Y(z) at the origin are prescribed. By the work of Fokas, Its and Kitaev [22], the above RH problem for Y(z) has a unique solution, in which π_n(z) is the monic polynomial and P_n(z) = γ_n π_n(z) is the orthonormal polynomial with respect to the weight w(x, t) in (1.4).
Riemann-Hilbert problem for T
In order to normalize the matrix function Y (z) at infinity, we rescale the variable and introduce the first transformation Y → T, and T is defined as follows for z ∈ C \ [0, +∞), where ℓ = −2(1 + 2 ln 4) is the Euler-Laguerre constant.
Then T(z) solves the corresponding RH problem: (b) T(z) satisfies a jump condition, and its asymptotic behaviors at infinity and at the origin are prescribed. For the classical Laguerre weight x^α e^{−4x}, x > 0, α > −1, the equilibrium measure is the density in (1.3), which is independent of α, see [45,40]. From (3.72), we can construct g(z) with the equilibrium measure of the classical Laguerre weight for large n and achieve the normalization of Y(z) at infinity; for a similar method, see [46]. We define several auxiliary functions, following [40], with arg(z − x) ∈ (−π, π) and arg z ∈ (0, 2π).
In order to remove the oscillatory diagonal entries in the above jump matrix for x ∈ (0, 1), the contour can be deformed into an opening lens via the matrix factorization. We define the following piecewise analytic function S(z).
Riemann-Hilbert problem for S
We introduce the second transformation T → S, where S(z) is defined piecewise: one expression for z outside the lens-shaped region, one for z in the upper lens region, and one for z in the lower lens region, with arg z ∈ (−π, π).
Combining the conditions (3.72), (3.64) and (3.65) of the RH problem for T with the definition (3.73) of S, S satisfies a RH problem with the jump condition (3.74), (c) a prescribed asymptotic behavior at infinity, and (d) a prescribed asymptotic behavior at the origin, with separate expressions in the upper and lower lens regions.
The asymptotic behavior at infinity is prescribed. In the spirit of [17] and [34], the unique solution to the above RH problem can be constructed explicitly on (0, 1). The branches above are chosen as arg z ∈ (−π, π) and arg(z − 1) ∈ (−π, π).
Riemann-Hilbert problem for R
Here, we introduce the last transformation S → R, with R(z) defined accordingly. Hence, R(z) satisfies the following RH problem.
(b) R(z) satisfies the jump conditions
where c is a positive constant and the error term is uniform for z ∈ Σ_R. Then one finds (3.92). Furthermore, by the method and procedure of norm estimation of the Cauchy operator as shown in [17,18], one has an estimate in which the error term is uniform for z ∈ C.
Proof of Theorem 1
In this subsection, we prove the Painlevé V kernel result at the hard edge.
Moreover, we give an estimate of f(ξ) under the restrictions s < ǫ ≪ ε_1 = |ξ| ≤ ε < ε_2. Furthermore, in order to use the Plemelj-Sokhotski formula, we split the original integration contour Γ into three parts, see Figure 6; the first part is a closed integration path. After some calculations, and setting ǫ = 2s, one obtains an estimate valid for α + λ + 1 > 0, where we have used several auxiliary estimates.
Figure 6. The integration contours.
Remark 7. In order to compare with the results for the singularly perturbed Laguerre weight in [46], we take the same integration contour as in (5.11) therein for the integration contour Γ in (4.124). They are different: for example, the lower bound δ satisfies s/δ = O(1) and is taken as δ = s in [46], whereas in our case the lower bound satisfies ǫ > s and we set ǫ = 2s.
For α + λ ∈ N, the modified Bessel matrix function has a factorization in which G(ξ) is an entire matrix function; this can be verified with explicit formulas. With f(ξ) in (4.119) replaced by the modified function satisfying (4.123), one can define the analogous quantity, where the integration contour Γ is kept the same as in (4.124).
(e) The behavior as z → 1 is prescribed. From the jump condition (4.140), as s → ∞ and z stays away from the origin, the jumps on Σ′_1 and Σ′_3 tend to the identity matrix. So A(z, s) can be approximated by B(z), which is independent of s, and B(z) satisfies the following RH problem.
With an argument similar to that in Section 3.7, one finds an estimate in which the error term is uniform for bounded z ∈ (0, ∞) and t ∈ (0, c], where c is a finite positive constant.
Airy kernel at the soft edge
The construction of P^{(1)}(z) in the neighborhood of z = 1. We seek the local parametrix P^{(1)}(z) in the neighborhood U(1, r) = {z ∈ C : |z − 1| < r} for small r > 0. Moreover, P^{(1)}(z) satisfies the following RH problem.
We take the following transformation to obtain constant jump matrices,
where E_1(z) is an invertible analytic matrix function in U(1, r). P^{(1)}(z) then fulfills the corresponding RH problem. It is well known that the Airy function has been successfully applied to construct the local parametrix in [17,20], see also [45]; for more information, see [39]. With the jump condition of P^{(1)}(z) in (5.164), it is natural to consider the following RH problem for P^{(1)}(z), which can be constructed from the Airy function and its derivatives.
Involvement of O-GlcNAcylation in the Skeletal Muscle Physiology and Physiopathology: Focus on Muscle Metabolism
Skeletal muscle represents around 40% of whole body mass. The principal function of skeletal muscle is the conversion of chemical energy into mechanical energy to ensure the development of force, provide movement and locomotion, and maintain posture. This crucial energy dependence is supported by the capacity of the skeletal muscle to act as a central "reservoir" of amino acids and carbohydrates for the whole body. A fundamental post-translational modification, named O-GlcNAcylation, depends, inter alia, on these nutrients; it consists of the transfer or the removal of a unique monosaccharide (N-acetyl-D-glucosamine) to or from a serine or threonine hydroxyl group of nucleocytoplasmic and mitochondrial proteins in a dynamic process by the O-GlcNAc Transferase (OGT) and the O-GlcNAcase (OGA), respectively. O-GlcNAcylation has been shown to be strongly involved in crucial intracellular mechanisms through the modulation of signaling pathways, gene expression, or cytoskeletal functions in various organs and tissues, such as the brain, liver, kidney or pancreas, and linked to the etiology of associated diseases. In recent years, several studies have also focused on the role of O-GlcNAcylation in the physiology and the physiopathology of skeletal muscle. These studies were mostly interested in O-GlcNAcylation during muscle exercise or muscle-wasting conditions. Major findings pointed out a different "O-GlcNAc signature" depending on muscle type metabolism at resting, wasting and exercise conditions, as well as depending on acute or long-term exhausting exercise protocols. First insights showed differential OGT/OGA expression and/or activity associated with differential cellular stress responses through Reactive Oxygen Species and/or Heat-Shock Proteins. Robust data showed that these O-GlcNAc changes could lead to (i) a differential modulation of carbohydrate metabolism, since the majority of its enzymes are known to be O-GlcNAcylated, and to (ii) a differential modulation of the protein synthesis/degradation balance, since O-GlcNAcylation regulates some key signaling pathways such as Akt/GSK3β, Akt/mTOR, Myogenin/Atrogin-1, Myogenin/Mef2D, Mrf4 and PGC-1α in the skeletal muscle. Finally, such involvement of O-GlcNAcylation in the metabolic processes of the skeletal muscle might be linked to associated diseases such as type 2 diabetes or neuromuscular diseases showing a critical increase of the global O-GlcNAcylation level.
INTRODUCTION
Just over thirty years ago, O-linked N-acetyl-β-D-glucosaminylation, termed O-GlcNAcylation, was discovered in mouse lymphocytes by Torres and Hart (1). Since this discovery, about 1,400 studies have focused on this modification among the hundreds of other known post-translational modifications. Nowadays, the scientific community shows a growing interest, since half of these studies were published in the last 5 years, providing more and more relevant data to better characterize the impact of O-GlcNAcylation on cellular processes. It is ubiquitous, from viruses to plants and metazoans, and to date around 4,000 O-GlcNAc-modified proteins have been identified (2). O-GlcNAcylation appears to be an important molecular process in biology, especially since ubiquitous OGT and OGA knockout experiments in mice revealed that the O-GlcNAcylation balance is crucial for embryonic stem cell viability and embryonic development (3,4); recent data also supported the essential role of O-GlcNAcylation in adult life, since inducible global knockout of OGT dramatically increased mortality in mice (5).
O-GlcNAcylation is an atypical, reversible and dynamic glycosylation. Unlike the N- and O-glycans, O-GlcNAcylation consists of the transfer of a unique monosaccharide that is not elongated, the N-acetyl-D-glucosamine, onto a plethora of nucleocytoplasmic (6) and mitochondrial proteins (7). The O-GlcNAc modification is mediated by a couple of antagonist enzymes; the OGT (uridine diphospho-N-acetylglucosamine: peptide beta-N-acetyl-glucosaminyl-transferase) transfers the monosaccharide from the UDP-GlcNAc donor to a serine or threonine hydroxyl group of a protein through a beta linkage (8), while OGA (beta-N-acetylglucosaminidase) hydrolyses the O-GlcNAc moieties from O-GlcNAcylated proteins (9). Very recent data have shown that the moiety can also be added to proteins intended for the extracellular compartment, through a distinct and structurally unrelated OGT (called EGF-OGT) which works in an OGT-independent manner (10)(11)(12). Besides its reversibility, O-GlcNAcylation is also highly dynamic. Indeed, the GlcNAc moieties can be added and removed several times along the protein lifetime, and their turnover is shorter than that of the protein backbone. Moreover, this dynamic O-GlcNAc process can respond to many environmental conditions and physiological signals such as nutrient availability, in particular through its UDP-GlcNAc donor, the final product of the Hexosamine Biosynthesis Pathway (13,14). Finally, O-GlcNAcylation can also interplay with certain other post-translational modifications such as phosphorylation and ubiquitination [for review, see (15)(16)(17)].
However, recent data showed that O-GlcNAcylation is also involved in different cellular processes of the skeletal muscle, and its potential role in many disorders related to skeletal muscle defects remains undervalued. The present review discusses the involvement of O-GlcNAcylation in skeletal muscle metabolism (in particular glucose metabolism), the impact of exercise on O-GlcNAcylation, and finally, the potential role of this post-translational modification in skeletal muscle in the context of diseases such as type 2 diabetes and neuromuscular disorders.
RELATIONSHIP BETWEEN O-GLCNACYLATION AND METABOLISM IN SKELETAL MUSCLE
In the human body, the skeletal muscle is an essential tissue that converts chemical energy into mechanical energy, i.e., contraction, to generate force and ensure some fundamental functions of the body such as movement production, posture control and thermoregulation (37). The skeletal muscle represents 40% of the total human body weight, contains 50 to 75% of all body proteins and accounts for 30 to 50% of whole-body protein turnover (37). It is a huge reservoir of nutrients (e.g., glycogen and amino acids), a great producer/consumer of energy, and accounts for 30% of the resting metabolic rate in adult humans. For instance, from the basal state to the fully contracted state, the skeletal muscle can increase its energy consumption 300-fold within a few milliseconds (38). Interestingly, O-GlcNAcylation is known to be a cell nutrient sensor, and the "O-GlcNAc signature" depends on the biological state of cells (39). Within the unitary contractile apparatus, named the sarcomere, and more generally in the whole skeletal muscle cell, diverse proteins have been identified as O-GlcNAcylated since 2004 (18,(40)(41)(42)(43); the nature of these proteins is diverse, including contractile, structural, cytoskeletal, metabolic, chaperone and mitochondrial proteins, or proteins involved in signaling pathways. Thus, akin to phosphorylation (37,44,45), O-GlcNAcylation could play a significant, still undervalued, role in skeletal muscle physiology.
Exercise-Mediated O-GlcNAcylation Changes in the Skeletal Muscle
Contraction is the main function of the skeletal muscle to provide force generation and ensure movement production and posture control, from baseline to exercise conditions. Skeletal muscle is able to develop several adaptations with exercise through changes in contractile protein isoforms, protein turnover, metabolism, mitochondrial functions, intracellular signaling or transcriptional response [for review, see (46)(47)(48)]. As described, O-GlcNAcylation highly depends on metabolism through the Hexosamine Biosynthesis Pathway and the UDP-GlcNAc donor. Glucose metabolism has a key role in this process since about 2-5% of the glucose enters the HBP, while the remaining glucose goes to glycogen storage or glycolysis (13,49). In addition, a study carried out on skeletal muscle showed that HBP also depends on fatty acids, glutamate or nucleic acids (50). Among other processes, muscle contraction directly impacts glucose homeostasis. For example, exercise increases skeletal muscle glucose uptake through an insulin-dependent/GLUT4 transport pathway (51,52) or through the activation of CamKII by calcium released upon muscle contraction (53). Thus, we can assume that O-GlcNAcylation could be modulated through muscle training and induce several effects on muscle functions. However, although many studies reported the importance of exercise in the modulation of cardiac O-GlcNAcylation (54)(55)(56)(57)(58)(59), to date there are very few studies focusing on the skeletal muscle.
It has long been recognized that the concept of metabolic flexibility applies to skeletal muscle, meaning for example its ability to increase energy supply and provide sufficient and adequate "fuel" for muscle work (48). In this context, it has been recently shown that exercise could also modulate the global O-GlcNAcylation level in rat skeletal muscle (60)(61)(62). Two different types of exercise training were applied to rats: a single acute exercise run bout to fatigue and an exhausting 6-week interval training program, both on a treadmill. Following the acute exercise, the global level of O-GlcNAcylation was changed neither in the slow-twitch oxidative soleus nor in the fast-twitch glycolytic EDL (62) (Table 1). In contrast, a long-term training program in rats led to an increase of the global O-GlcNAcylation level in the total extract of both soleus and EDL, whereas its level was not altered in the myofilament fraction (62). In this study, no alteration of OGT and OGA expression was observed in either exercise protocol, suggesting a potential regulation at the activity level following the long-term exercise. However, in another study, the same authors mentioned a decreased OGA expression in the total extract of the soleus, but not in the myofilament fraction, as well as in both extracts in the EDL after a similar long-term training program in rats, suggesting a differential OGA turnover which might explain, at least partly, the changes in the O-GlcNAc rate between EDL and soleus (61). Otherwise, these O-GlcNAc adaptations following skeletal muscle activity seemed to be fully different according to the exercise protocol as well as the skeletal muscle fiber type. This could first be related to the metabolic flexibility and glucose utilization known to be different and differentially regulated depending on exercise protocols and muscle fiber types (46,48). Little is known so far about how exercise can modulate the glucose flux into the Hexosamine Biosynthesis Pathway. A study demonstrated that hindlimb skeletal muscle UDP-HexNAc concentration increased after a single swimming protocol in ad libitum-fed but not in fasted rats. In parallel, muscle glycogen content decreased and the GFAT activity was not altered in these conditions (63). Thus, it would also be interesting to determine how glucose metabolism could operate a distinct modulation of O-GlcNAcylation in skeletal muscles depending on different exercise protocols since (i) the metabolism is different between fast- and slow-twitch muscles, (ii) O-GlcNAcylation is highly regulated through glucose metabolism and (iii) glucose uptake through GLUT4 is highly modulated in skeletal muscle during exercise (52). These future directions could bring new insights into the involvement of O-GlcNAcylation in the modulation of the beneficial effects of exercise in skeletal muscle, as a game changer to develop new strategies that counteract some muscular or metabolic disorders such as obesity or diabetes mellitus.
Moreover, the O-GlcNAc adaptations post-exercise could also be directly linked to a differential modulation of the O-GlcNAc pattern and the O-GlcNAc processing enzymes between both muscle fiber types seen at resting conditions. Indeed, the O-GlcNAcylation level is higher in the slow soleus muscle compared with the fast EDL muscle (62,64,65); in parallel, the expression level of OGT, OGA, GFAT1, and GFAT2 is higher in soleus than in EDL (62), as well as the activity of OGT (64) ( Table 1).
Finally, it is well known that the metabolism and stress response of fast-twitch glycolytic fibers and slow-twitch oxidative fibers are different (46). Thus, this concept could support the differential modulation of O-GlcNAcylation inducing variable consequences on cellular functions during basal and exercise conditions. During muscle activity, reactive oxygen species (ROS) are produced in skeletal muscle (66), and O-GlcNAcylation has been shown to be involved in the modulation of oxidative stress through different signaling pathways including KEAP1/NRF2, FOXO, NFκB, and p53 (67)(68)(69). A recent study compared the O-GlcNAc pattern and the expression of O-GlcNAc processing enzymes between a single Diethyl Maleate (DEM) intraperitoneal injection, performed to deplete glutathione and thus induce oxidative stress in rats, and a single acute exercise on a treadmill (60). Interestingly, in the fast-twitch white gastrocnemius, the global level of O-GlcNAcylation increased after acute exercise as well as after glutathione depletion. On the contrary, in the slow-twitch soleus, no significant variation of the global O-GlcNAc level was observed (Table 1). These data suggest that the differential oxidant/antioxidant balance between slow- and fast-twitch muscles could also be at the origin of the differential modulation of the O-GlcNAcylation level observed in the two muscle types. However, this complex interrelationship between the cellular redox state and O-GlcNAcylation does not seem to be the only mechanism involved in O-GlcNAc regulation during exercise, since the OGT and GFAT mRNA expressions were different between acute exercise and glutathione depletion in both muscles (60).
O-GlcNAcylation Could Mediate the Skeletal Muscle Glucose Metabolism
Recent studies demonstrated that energy metabolism, insulin sensitivity and exercise-induced glucose uptake depend on O-GlcNAcylation (65,70). Indeed, the muscle-specific knockout of OGT led to an increase of glucose uptake in skeletal muscle in basal conditions (65) as well as following exercise (70). Thus, the specific inhibition of O-GlcNAcylation in skeletal muscle also facilitated glucose utilization in skeletal muscle, leading to greater exercise-induced glucose disposal, involving AMPK (70). Interestingly, the enhancement of glucose uptake was correlated with an increase of glycolytic enzyme activities, suggesting that these mice have a greater reliance on carbohydrates for energy production (65).
It is worth noting that almost all enzymes of the glycolytic pathway, such as phosphofructokinase (PFK), fructose bisphosphate aldolase (FBPA), triose phosphate isomerase (TPI), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), beta-enolase (BE), and pyruvate kinase (PK), are O-GlcNAcylated (40,(71)(72)(73)(74)(75)(76) [and for review, see (77,78)] (Figure 1). Thus, O-GlcNAcylation may regulate the expression and/or activity of glycolytic enzymes and might consequently be involved in the regulation of glucose metabolism in skeletal muscle. Supporting this important role of O-GlcNAcylation as a nutrient sensor, it was first demonstrated that O-GlcNAcylation is involved in the regulation of phosphofructokinase 1 and pyruvate kinase M2 activity (71,73). Indeed, induction of O-GlcNAcylation at serine 529 of PFK1 inhibited PFK1 oligomerization and activity, and reduced the glycolytic flux as well (71). Moreover, knockdown of OGT led to an increased PKM2 activity (73). In this previous study, the resulting decrease of the O-GlcNAc PKM2 level was associated with a decreased PKM2 expression and with a decrease of PKM2 serine phosphorylation (73). Conversely, the increase of PKM2 O-GlcNAcylation by the use of Thiamet-G, a potent OGA inhibitor, led to an upregulation of PKM2 expression and a decreased PKM2 activity (73). More recently, pharmacological inhibition of OGA and knockdown of OGT were associated with a respective increase and decrease of GK expression, which is the major regulator of glucose input into the cell and therefore the major regulator of glucose metabolism (79). Thus, O-GlcNAcylation is a key regulator of the enzymes of glycolysis; interestingly, two downstream enzymes of PK, lactate dehydrogenase (LD) and pyruvate dehydrogenase (PDH), are also modified by O-GlcNAc moieties (74), suggesting that O-GlcNAcylation may also be an important regulator of the utilization of the glycolysis end-product, i.e., pyruvate, through the anaerobic pathway (lactate dehydrogenase) or the aerobic pathway (TCA cycle; Figure 1). Since almost all enzymes of the TCA cycle are described as O-GlcNAc modified [aconitate hydratase (A), isocitrate dehydrogenase (IDH), ketoglutarate dehydrogenase (KGD), succinyl-CoA ligase (SL), succinate dehydrogenase (SDH) and malate dehydrogenase (MDH), as well as several subunits of respiratory chain complexes (40,80,81)], O-GlcNAcylation might play an important role in ATP production as well (Figure 1). However, to date, the literature does not mention any potential O-GlcNAcylation of citrate synthase (CS) or fumarate hydratase (FH). In the same way, the creatine shuttle, permitting the communication between the sites of ATP consumption (i.e., myofibrillar ATPases) and mitochondria (82), could also be modulated by O-GlcNAcylation since creatine kinase is itself O-GlcNAcylated (40).
Many data suggest a close association between the myofibrils and the enzymes involved in metabolism. Indeed, fructose-bisphosphate aldolase (FBPA), an enzyme of glycolysis and neo-glucogenesis, is known to be localized to the Z-line of the sarcomere in association with α-actinin within a multiprotein complex termed a metabolon (83,84). In the same way, the interaction between phosphoglucoisomerase (PGM), phosphofructokinase (PFK), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), pyruvate kinase (PK) and aldolase (FBPA) also occurs with the thin filament (85,86). These specific interactions between glycolytic enzyme complexes (termed the glycolytic metabolon) and the contractile apparatus may ensure a very efficient and dynamic localized production of ATP for myosin ATPase and actomyosin interactions resulting in force development. Indeed, it was recently demonstrated that the global modulation of the O-GlcNAcylation level in C2C12 skeletal muscle cells differentiated into myotubes led to the modulation of protein-protein interactions in multiprotein complexes; while this study focused on structural proteins, the proteomic data suggested that the glycolytic metabolon could be modulated by O-GlcNAcylation changes as well (Figure 1) (18). Indeed, several glycolytic enzymes (indicated by blue asterisks in Figure 1) were identified in protein-protein complexes which were modulated after the global O-GlcNAcylation changes, suggesting that the glycolytic metabolon could potentially be modulated consecutively to O-GlcNAcylation variations.
In addition, O-GlcNAcylation is also involved in the modulation of the insulin pathway through the modulation of signaling proteins such as IRS-1, PI3K, PDK1, or Akt. O-GlcNAcylation of these upstream components of the insulin signaling pathway occurs after the recruitment of OGT to the membrane, leading to the attenuation of insulin sensitivity (87)(88)(89) [for review, see (15,90)]. Interestingly, glycogen metabolism could also be modulated by O-GlcNAcylation through the regulation of glycogen synthase, O-GlcNAcylation acting as an inhibitory mechanism of this enzyme (15,91,92); in addition, the UDP-glucose pyrophosphorylase (PP), which generates UDP-Glc, is also O-GlcNAcylated (75), suggesting that O-GlcNAcylation may be a regulator of glycogen synthesis. Thus, O-GlcNAcylation, which itself depends on the glucose level through the Hexosamine Biosynthesis Pathway, could act as a nutritional sensor to regulate the glycolytic flux through the modification of glycolytic enzymes, the regulation of protein expression, the modulation of their phosphorylation level and/or the modulation of the metabolon.
Taken together, all these data were gained from different tissues or cell lines, and the precise role of O-GlcNAcylation in the regulation of glucose metabolism in the skeletal muscle remains to be clearly elucidated. In this context, it would also be wise to investigate the exact role of O-GlcNAcylation, not only in the regulation of enzyme expression and/or activities, but also in the modulation of these metabolons, since OGT and OGA are also enriched in the Z-line and the I-band of the sarcomere (93). Altogether, these data strongly argue in favor of a key role of the O-GlcNAcylation process in the regulation of the energy metabolism of skeletal muscle, in particular the utilization of glucose as "fuel" to provide energy to ensure muscle contraction.
O-GLCNACYLATION AND SKELETAL MUSCLE DYSFUNCTIONS
O-GlcNAcylation Is Associated With Muscular Atrophy
Muscle atrophy arises from a defect in the balance between protein synthesis and degradation (94,95). Both intracellular mechanisms maintain protein homeostasis and could potentially be modulated by O-GlcNAcylation (17,96,97), but the role of O-GlcNAcylation in the regulation of protein synthesis and degradation has not really been investigated in the skeletal muscle. However, this knowledge could be crucial since muscle atrophy is often associated with impairment of contractile and structural functions, impairment of metabolic processes, and changes of fiber phenotype.
O-GlcNAcylation and skeletal muscle atrophy were first associated following 14- or 28-day hindlimb unloading (HU) experiments in rats (40,64,93). One of the most relevant findings was an opposite modulation of the global O-GlcNAc level between the slow-twitch soleus and the fast-twitch EDL (64). A decrease of the global O-GlcNAcylation level was observed in the atrophied rat soleus, while in contrast it was not altered in the non-atrophied rat EDL following the 14- and 28-day hindlimb unloading. Moreover, OGT activity also changed in opposite directions in the two muscles, decreasing in soleus and increasing in EDL; however, it has been shown that OGA activity increased in both muscles (64). This first report suggested that O-GlcNAcylation could be related to the muscle atrophy and plasticity processes. Interestingly, the authors demonstrated in parallel that heat-shock protein expression was also altered. Indeed, in the rat EDL, HSP70 expression increased, contrary to the atrophied rat soleus, after a 14-day HU (64). This heat-shock protein is known to be O-GlcNAcylated, to have lectin properties (98)(99)(100), and to have the ability to increase stress tolerance while decreasing protein degradation (101,102). Thus, in rat EDL, the increase of the global O-GlcNAc level, as well as of HSP70 expression, could contribute to an improvement of stress tolerance, as suggested in cardiomyocytes (103), and prevent muscle atrophy (64,104). Interestingly, the expression of another heat-shock protein, alphaB-crystallin, which is also known to be O-GlcNAcylated (105), was decreased in the atrophied soleus and could therefore be involved in the plasticity processes (106).
More recently, different studies focused on the molecular pathways involved in skeletal muscle atrophy, including the relationship between O-GlcNAcylation, signaling pathways and muscular atrophy in skeletal muscle cell models (Figure 2). Upon treatment with Thiamet-G, a potent inhibitor of the O-GlcNAcase, C2C12 cells showed both a global increase of O-GlcNAcylation and a modulation of some catabolic and anabolic pathways leading to atrophy (107). First, this study reported a significant decrease of Akt and GSK3β phosphorylation, as well as an increase of myostatin expression, which could lead to an inhibition of some anabolic pathways. Second, an increase of Atrogin-1 expression was reported and could lead to an enhancement of some catabolic pathways (Figure 2). Indeed, myostatin is a negative modulator of skeletal muscle growth and inhibits some protein synthesis pathways such as Akt/mTOR; it also promotes degradation of many sarcomeric proteins and seems to be dependent on Atrogin-1 (108). Interestingly, this report of molecular events following OGA deficiency in C2C12 cells could partly explain glucocorticoid-induced muscle wasting in stress conditions (109). Indeed, in C2C12 cells treated with dexamethasone, OGA activity was decreased, as was its expression (a similar molecular event was described when cells were treated with Thiamet-G); in parallel, Murf-1 expression, leading to atrophy, increased (107) (Figure 2). It was suggested that dexamethasone could repress the OGA gene via binding onto a Glucocorticoid Response Element (107,110); another mechanism could also involve OGT, known to be a cofactor of glucocorticoid receptors promoting transrepression (111). This is a relevant new insight regarding the use of glucocorticoids as the only available therapeutic treatment to maintain essential muscle functions in some muscular dystrophies (112). However, the O-GlcNAc-mediated molecular mechanisms could be partly different between glucocorticoid-induced atrophy and disuse atrophy, since the O-GlcNAc pattern is different and the expression of the O-GlcNAc processing enzymes is not changed (64).
In parallel, in C2C12 cells, the increase of the global O-GlcNAcylation level by the use of Thiamet-G or another OGA inhibitor such as PUGNAc, or by another strategy such as OGA knockdown, seemed to suppress the myogenic differentiation of the muscle cells (113)(114)(115). Indeed, the terminal differentiation stage of C2C12 cells is altered through a decrease of mrf4, myogenin (113), and myoD expression (115) (Figure 2). Interestingly, a decreased O-GlcNAcylation of Mef2D, a transcriptional activator of myogenin, suppressed its recruitment to the myogenic factor promoter (114). Moreover, in the case of a global decrease of the O-GlcNAcylation level, the specific O-GlcNAc rate of PGC-1α led to its degradation and suppressed mitochondrial biogenesis and myogenesis in C2C12 cells (115) (Figure 2). Thus, through OGA manipulation in C2C12 cells, O-GlcNAcylation seemed to be a negative regulator of myogenesis. This conclusion is reinforced by the overexpression of an inactive OGA variant and the increase of O-GlcNAcylation in a rat model, inducing skeletal muscle atrophy (116). In contrast, a skeletal muscle-specific OGT knockout in mice, leading to a global decrease of O-GlcNAcylation in the tissue, did not induce muscle hypertrophy (65). Indeed, the tibialis anterior, EDL and soleus of these mice showed a normal morphology and mass. Interestingly, these mice exhibited reduced fat mass (65). The muscle phenotype of global OGT or OGA knockout in mice is difficult to determine since severe perinatal lethality has been reported, and to date there are no available data on an inducible OGA knockout or a skeletal muscle-specific OGA knockout model. Regarding O-GlcNAcylation, all these recent studies revealed complex, but not fully resolved, pathways leading to skeletal muscle atrophy.
Skeletal Muscle O-GlcNAcylation in Physiopathological Context
In various organs or tissues (e.g., heart, brain, pancreas, and kidney), O-GlcNAcylation was previously described as being both cell-protective upon acute variation and deleterious upon chronic and sustained variations, especially through the impairment of glucose utilization or the glucose toxicity paradigm that may lead to the progression of several diseases [for review, see (31,34,36,68,117,118)]. To date, one of the best-defined examples is the involvement of O-GlcNAcylation in the progression of diabetes, characterized by hyperglycemia as the result of the body's inability to correctly process blood glucose. Consequently, all insulin-sensitive tissues present hyper-O-GlcNAcylation and many complications. It appeared that a single nucleotide polymorphism in MGEA5, encoding OGA, is associated with type 2 diabetes in a Mexican American population (119). Moreover, Goto-Kakizaki rats, which develop type 2 diabetes mellitus in early life, express an inactive 90 kDa isoform of OGA (120). Finally, a conditional OGA knockout in mice led to a low blood glucose concentration, a decreased insulin sensitivity, and perinatal death (121). However, the involvement of O-GlcNAcylation in the onset or in the progression of diabetes is still under debate (118), since a pharmacological inhibition of OGA in adipocytes did not cause insulin resistance or disruption of glucose homeostasis (122).
Interestingly, among insulin-sensitive tissues, the skeletal muscles are responsible for about 75% of insulin-stimulated glucose uptake in the whole human body. In skeletal muscle, a global increase of O-GlcNAcylation induced insulin resistance (123), whereas insulin infusion led to an increase of the HBP flux and the O-GlcNAc content (124). It has been shown that muscular overexpression of GFAT in transgenic mice led to muscle insulin resistance (125,126). In the same way, upregulation of GFAT expression and activity was described in the skeletal muscle of diabetic patients (127). Moreover, overexpression of GFAT in mice is associated with a decrease of GLUT4 translocation to the sarcolemma (128), whereas transgenic mice overexpressing GLUT4 did not show any alteration of GFAT expression or activity, unlike mice overexpressing GLUT1 (129), although both of these mouse models showed an increase of glucose uptake and O-GlcNAcylation in muscle (130). Thus, the role of the HBP in insulin resistance seems to be complex and is still not resolved (131). Recent data indicated that TRIB3 may be a novel link between the HBP and insulin resistance in skeletal muscle (132).
Experiments with mice overexpressing OGT displayed muscle insulin resistance as well as hyperleptinemia (133). Pharmacological inhibition of OGA (123) or OGA knockdown (116) in skeletal muscle cells also induced insulin resistance. In parallel, after insulin stimulation in the liver, PIP3 recruited OGT from the nucleus to the membrane and caused perturbations of insulin signaling (88). Taken together, these data suggest that O-GlcNAcylation could be a link between insulin resistance and muscle impairment, since O-GlcNAcylation is involved in skeletal muscle contractility, sarcomere structuration and myogenesis, and diabetic patients often display a "diabetic myopathy" (134). A significant volume of literature suggests that O-GlcNAcylation links diabetes and cardiovascular complications [for review, see (27,35)]. In cardiac muscle from mice developing insulin resistance, mitochondrial dysfunction and changes in contractile properties were associated with an increase of O-GlcNAcylation (58). Indeed, a correlation has been shown between an increase of O-GlcNAcylation and a decrease of calcium sensitivity in the cardiac tissue (135), as well as in the skeletal muscle tissue (41,42). Interestingly, it has been shown that adenoviral transfer of OGA (136) or injection of a bacterial homologue of OGA (137) reversed the excessive O-GlcNAc content and the cardiac contractile dysfunctions. In this study, many myofibrillar proteins exhibited changes of their O-GlcNAcylation level, but the modulation of contractile properties could also be explained by a modulation of Ca2+ handling (138). Indeed, within an insulin resistance context in cardiac tissue, Serca2a expression was changed (58,136), and an alteration of O-GlcNAc levels significantly affected Ca2+ handling, Serca2a (139) and STIM1 functions (140). This role of O-GlcNAcylation should also be considered in skeletal muscle since dysfunction of contractility as well as of Ca2+ handling were also measured in the skeletal muscle of rat models of diabetes (141), via impaired Serca and GLUT4 (142).
Interestingly, OGT and OGA are highly concentrated at the sarcomere (93). Moreover, in the cardiac tissue of STZ-diabetic mice, mislocalization of OGA and OGT within the sarcomere was associated with altered activities as well as changes in the interactions of OGA with actin, tropomyosin, and MLC1 (137). In parallel, OGT was also mislocalized in mitochondria, with the interaction between OGT and complex IV being decreased while OGA activity decreased (143). The distribution of the O-GlcNAc processing enzymes in the sarcomere of skeletal muscle, as well as in other compartments such as mitochondria, should also be investigated to better understand the involvement of O-GlcNAcylation in the diabetic context. Indeed, several datasets support a relocalization of O-GlcNAc in the context of atrophy or exercise (61,62,93). Recently, in C2C12 cells, an OGA knockdown induced insulin resistance and a decrease of mitochondrial biogenesis with decreased PGC1α expression (115).
In another context, it has been shown that the global O-GlcNAc level in skeletal muscle is increased in some human neuromuscular diseases (33). In particular, compared to normal muscle fibers, the O-GlcNAc signal seemed to be relocalized from the sarcolemma to the cytoplasm and nuclei in regenerative muscle fibers of muscular dystrophies, myositis, and rhabdomyolysis. A strong O-GlcNAc signal was also displayed in vacuolated fibers in sporadic inclusion body myositis and distal myopathies with rimmed vacuoles, as well as in neurogenic muscular dystrophy. This rise in O-GlcNAc could be associated with the stress response, since HSP70 expression is increased in the cytoplasmic compartment in these neuromuscular diseases. Moreover, a mutation in the GNE gene is known to cause distal myopathies with rimmed vacuoles. Interestingly, UDP-GlcNAc is the substrate of the GNE enzyme, and an impairment of GNE expression/activity could alter the UDP-GlcNAc content and thus the O-GlcNAcylation mediated by OGT (33).
CONCLUSION AND PERSPECTIVES
In the past ten years, a growing number of studies have addressed O-GlcNAcylation in skeletal muscle physiology and pathophysiology. This glycosylation is closely related to glucose metabolism, and skeletal muscle is one of the largest consumers of glucose; in addition, numerous studies have shown that skeletal muscle is essential for glucose homeostasis and insulin sensitivity. Moreover, muscle plasticity allows skeletal fibers to adapt their contractile and metabolic properties to physiological conditions. Consequently, glucose utilization changes depending on rest, wasting, or exercise, as well as on fiber type composition. Interestingly, the global O-GlcNAcylation pattern in skeletal muscle changes depending on these different conditions and fiber types. O-GlcNAcylation may be both a cause and a consequence of the modulation of glucose utilization, in a virtuous or deleterious cycle, since most of the enzymes of carbohydrate metabolism are known to be O-GlcNAcylated. This concept clearly positions O-GlcNAcylation as a potential key regulator of skeletal muscle glucose metabolism. O-GlcNAcylation has also been shown to regulate key signaling pathways, as well as the cellular stress response, involved in maintaining the protein synthesis/degradation balance in skeletal muscle. From these novel insights, a new paradigm is emerging that considers O-GlcNAcylation a key factor in skeletal muscle pathophysiology, such as atrophy or insulin resistance, and more generally in neuromuscular diseases. However, whether O-GlcNAc changes are a cause or a consequence of skeletal muscle impairments is currently under debate. In any case, it is now clear that O-GlcNAcylation has many roles in skeletal muscle pathophysiology, which should be confirmed in the future by larger dedicated studies.
Indeed, with the exponential development of mass spectrometry and innovative enrichment techniques, the identification of O-GlcNAc sites that are modulated in response to a given stimulus, condition, or disease will clearly be the challenge of tomorrow. In fact, O-GlcNAcylation regulates protein activity, protein localization, and protein-protein interactions, and can interplay with phosphorylation or ubiquitination. This strategy will lead to a deeper understanding of the precise mechanisms by which O-GlcNAcylation can regulate skeletal muscle metabolism. Secondly, the behavior of the O-GlcNAc processing enzymes should be investigated more precisely, since their localizations and/or interactions, and consequently the pattern of O-GlcNAcylation, seem to change in response to different stimuli in skeletal muscle fibers, especially around the myofilaments. Exercise is one of these stimuli inducing complex O-GlcNAc variations, depending on the muscle phenotype but also on the kind of exercise. Importantly, due to the enhancement of glucose utilization during exercise, the O-GlcNAcylation process in skeletal muscle could be considered a potential target to alleviate metabolic disorders. Finally, O-GlcNAcylation should be investigated in specific muscular dystrophies or congenital myopathies, since glucose utilization is often impaired, the sarcomere can be disorganized, mitochondrial biogenesis altered, nuclei delocalized, or muscle plasticity changed. It will be worth knowing whether O-GlcNAcylation contributes to or alleviates neuromuscular disorders, or could be considered a marker of these diseases.
Effects of electroacupuncture on urinary metabolome and microbiota in presenilin1/2 conditional double knockout mice
Aim The treatment of Alzheimer’s disease (AD) is still a worldwide problem due to the unclear pathogenesis and lack of effective therapeutic targets. In recent years, metabolomic and gut microbiome changes in patients with AD have received increasing attention, and the microbiome–gut–brain (MGB) axis has been proposed as a new hypothesis for its etiology. Considering that electroacupuncture (EA) efficiently moderates cognitive deficits in AD and its mechanisms remain poorly understood, especially regarding its effects on the gut microbiota, we performed urinary metabolomic and microbial community profiling on EA-treated AD model mice, presenilin 1/2 conditional double knockout (PS cDKO) mice, to observe the effect of EA treatment on the gut microbiota in AD and find the connection between affected gut microbiota and metabolites. Materials and methods After 30 days of EA treatment, the recognition memory ability of PS cDKO mice was evaluated by the Y maze and the novel object recognition task. Urinary metabolomic profiling was conducted with the untargeted GC-MS method, and 16S rRNA sequence analysis was applied to analyze the microbial community. In addition, the association between differential urinary metabolites and gut microbiota was clarified by Spearman’s correlation coefficient analysis. Key findings In addition to reversed cognitive deficits, the urinary metabolome and gut microbiota of PS cDKO mice were altered as a result of EA treatment. Notably, the increased level of isovalerylglycine and the decreased levels of glycine and threonic acid in the urine of PS cDKO mice were reversed by EA treatment, which is involved in glyoxylate and dicarboxylate metabolism, as well as glycine, serine, and threonine metabolism. In addition to significantly enhancing the diversity and richness of the microbial community, EA treatment significantly increased the abundance of the genus Mucispirillum, while displaying no remarkable effect on the other major altered gut microbiota in PS cDKO mice, norank_f_Muribaculaceae, Lactobacillus, and Lachnospiraceae_NK4A136 group. There was a significant correlation between differential urinary metabolites and differential gut microbiota. Significance Electroacupuncture alleviates cognitive deficits in AD by modulating gut microbiota and metabolites. Mucispirillum might play an important role in the underlying mechanism of EA treatment. Our study provides a reference for future treatment of AD from the MGB axis.
Introduction
Alzheimer's disease (AD) is a neurodegenerative disease, mainly manifested as memory impairment, apraxia, agnosia, impaired spatial and calculation abilities, and personality and behavior changes. AD has become the third leading cause of disability and death in the elderly, after cardiovascular and cerebrovascular diseases and malignancies (Du et al., 2018). There are various hypotheses about its etiology: abnormal deposition of amyloid beta (Aβ) in the extracellular space of neurons, the formation of neurofibrillary tangles of tau protein in neurons, inflammation, cholinergic neuron damage, oxidative stress, and others; however, it is hard to explain the disease entirely with any single hypothesis. Over the past decade, it has been widely believed that microbiome changes are closely related to neurodegenerative diseases, of which AD is one of the most representative. The gut microbiota of different AD transgenic mice has been reported to vary with age, implying an association with disease progression (Wang et al., 2019). Similarly, the composition and diversity of the microbiota in fecal samples from patients with AD also changed compared with healthy subjects (Zhuang et al., 2018). The microbiome-gut-brain (MGB) axis, which describes direct or indirect relations among the brain, gut, and gut microbiota, consists primarily of bidirectional crosstalk through three distinct but parallel communication pathways, "neuro-immune-endocrine" (Du et al., 2018; Sun M. et al., 2020).
Several recent studies have enriched the evidence for changes in the MGB axis in AD pathogenesis, which may explain various characteristics of AD processes along with changes in the gut microbiota. In APPswe/PS1dE9 transgenic mice, antibiotic-induced disturbance of gut microbial diversity affects Aβ plaque deposition and neuroinflammation (Minter et al., 2016). In addition, Lactobacillus plantarum reinforced the beneficial effects of memantine treatment in APP/PS1 mice by remodeling the gut microbial composition, inhibiting the synthesis of trimethylamine-N-oxide, a gut microbial metabolite, and reducing cluster protein levels. This research also observed improved cognition, reduced Aβ levels in the hippocampus, and protected neuronal integrity and plasticity in the mice. Furthermore, the use of oral probiotics to modify the gut microbiota has been proven beneficial in reducing oxidative stress (Bonfili et al., 2018) and restoring glucose homeostasis in 3xTg-AD mice (Bonfili et al., 2019), and abnormal glucose metabolism is also one of the most important clinical and biochemical characteristics leading to AD (Adlimoghaddam et al., 2019).
Metabolites derived from the gut microbiota play important roles in the MGB axis. For example, gut microbiota-derived short-chain fatty acids such as valeric acid, butyric acid, and propionic acid can interfere with Aβ aggregation (Ho et al., 2018). Similarly, metabolites released by bacteria abundant in a healthy gut, such as 3-hydroxybenzoic acid and 3-(3′-hydroxyphenyl)propionic acid (Wang et al., 2015), support cognitive function, whereas metabolites released by pro-inflammatory bacteria in AD aggravate inflammation of the central nervous system (Bostanciklioglu, 2019). Moreover, a growing number of studies have clearly shown various changes in the metabolism of AD, including cerebral glucose metabolism (Cisternas and Inestrosa, 2017; Adlimoghaddam et al., 2019), lipid metabolism (Han et al., 2011; Liao et al., 2017), and the metabolism of several amino acids in the dopamine-norepinephrine pathway (Kaddurah-Daouk et al., 2011). Metabolomic approaches allow qualitative and quantitative analyses of metabolic profiles. Compared with other biological fluids, urine can be obtained non-invasively, contains abundant metabolites, and reflects the imbalance of all biochemical pathways in the body (Khamis et al., 2017). Urine metabolomics can therefore detect subtle metabolic differences in specific diseases or therapeutic interventions.
Electroacupuncture (EA), a traditional treatment originating from China, is now accepted worldwide. The efficacy of EA for cognitive deficits in AD has been widely reported in clinical and animal studies (Peng et al., 2017; Cai et al., 2019). Some mechanisms have been documented, such as inhibition of neuroinflammation (Cai et al., 2019), improvement of N-acetylaspartate, glutamate, and glucose metabolism (Lin et al., 2018), reduction of Aβ deposits (Tang et al., 2019), upregulation of BDNF expression and promotion of neurogenesis (Lin et al., 2016), activation of PPAR-γ (Zhang M. et al., 2017), and attenuation of NOX2-related oxidative stress (Wu et al., 2017). However, its biological basis is still unclear, and the link between its effect on AD and the gut microbiota has rarely been reported. As mentioned earlier, the gut microbiota plays a very important role in the progression of AD. Therefore, it is necessary to clarify the potential role of EA in AD, especially from the perspective of the gut microbiota and metabolomics.
Presenilin 1/2 conditional double knockout (PS cDKO) mice are widely accepted as mice with a typical phenotype of AD (Saura et al., 2004; Lee and Aoki, 2012; Zhao et al., 2019). They exhibit age-dependent AD-like symptoms and pathology, such as cognitive deficits and synaptic plasticity impairments from the early stage, obvious neuroinflammation at a mature age, and hyperphosphorylated tau and cortical and hippocampal atrophy in the late stage (Saura et al., 2004; Chen et al., 2008; Zhao et al., 2019). Moreover, our previous studies demonstrated that they display metabolic and microbiotic changes, which were associated with the progression of the disease (Gao et al., 2021b). Considering all this, in this work we aim to investigate the influence of EA on the metabolome and gut microbiota in PS cDKO mice and to further explore the potential mechanism by which EA acts on cognition through gut microbial regulation.
Animals
The generation and genotyping of PS cDKO mice have been described previously (Saura et al., 2004). Mice carrying the Cre transgene, fPS1/fPS1, and PS2-/- served as PS cDKO mice, whereas their littermates without the Cre transgene, fPS1/+, and PS2+/+ or PS2± were assigned to the wild-type (WT) group. All mice were housed in a specific pathogen-free environment from birth with food and water freely available. The room, with 12-h light/dark cycles, was controlled at 23 ± 2°C. All animal protocols in this study were approved by the Animal Experimentation of Shanghai University of Traditional Chinese Medicine (PZSHUTCM191025005) and carried out in accordance with relevant guidelines and regulations. All methods are also in accordance with the ARRIVE guidelines.
Electroacupuncture treatment and sample collection
Five-month-old PS cDKO mice were randomly assigned to the cDKO group or the cDKO with EA treatment (cDKO + EA) group (n = 6). Mice in the groups were sex-matched, with half males and half females. Mice in each group were housed separately to avoid cage effects from microbiome transfer. For the cDKO + EA group, disposable acupuncture needles (0.17 mm × 7 mm, Changchun AIK Medical Device Co., Ltd., Changchun, China) were inserted perpendicularly into the muscle layer at Shenmen (HT7) and Taixi (KI3) on the same-side limbs. The Shenmen acupoint is located at the ulnar end of the carpal transverse crease of the forepaw, while Taixi is at the midpoint between the Achilles tendon and the medial malleolus. The bilateral acupoints were used alternately during the treatment. A pulse generator (G6805, Shanghai Medical Instrument High Technology Co., Ltd., Shanghai, China) was connected to deliver electrical current to the needles (continuous wave: 2 Hz, 1 mA, lasting 15 min). EA stimulation was administered every day starting at 8 a.m. and lasted for 30 days. The cDKO group and WT group received the same type of fixation for an equal duration. After that, fecal samples were collected. For urine collection, each mouse was kept separately in a metabolic cage for 1 day in the fasting state. The whole procedure was also conducted in a specific pathogen-free environment. The fecal and urine samples were snap-frozen in liquid nitrogen and then stored at −80°C until further analysis.
Behavioral tests
To observe whether EA influences cognitive deficits in PS cDKO mice, a Y maze and a novel object recognition task (Gao et al., 2021a) were conducted successively at a 3-day interval. Mice were placed in a sound-proofed behavior room in advance to adapt to the environment. The operators were blinded to the condition of each mouse during the behavioral tests.
Y maze
The Y maze, used to assess spatial recognition memory, was conducted as previously described (Gao et al., 2021a). First, one arm, named the novel arm, was blocked. Each mouse was placed in the start arm, facing the central joining region, and allowed to freely explore the two open arms for 8 min. One hour later, the novel arm was opened and the mouse was placed back in the start arm to freely explore the three arms. The percentage of time mice spent in the novel arm and the number of entries into it were calculated.
Novel object recognition task
The novel object recognition task, consisting of three sessions, was also used to evaluate recognition memory. During the first (training) session, an open-field chamber was set up with two objects of the same size, shape, color, and material, and each mouse was placed in it to explore for 5 min. At 1 h and 24 h after the training session, the mouse was placed in the chamber again, but one of the objects was replaced with an object differing in size, color, and shape. The time that each mouse spent exploring each object was recorded. The preference index was calculated as the ratio of the time spent exploring either of the identical objects (during the training session) or the novel object (in the next two sessions) over the total time spent exploring both objects.
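Purely as an illustration of the preference index defined above (not part of the study's actual analysis code), a minimal sketch might look like the following; the function and variable names are hypothetical.

```python
def preference_index(time_novel_s: float, time_other_s: float) -> float:
    """Preference index: time spent exploring the novel object (or either of the
    two identical objects during training) divided by the total exploration time."""
    total = time_novel_s + time_other_s
    if total == 0:
        raise ValueError("No exploration time was recorded for either object.")
    return time_novel_s / total

# Example: 36 s on the novel object and 24 s on the familiar one gives 0.6
# (chance level is 0.5, i.e., no preference).
print(preference_index(36.0, 24.0))  # 0.6
```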
Urinary metabolomic signatures
Urine metabolites were analyzed following an untargeted gas chromatography-time-of-flight mass spectrometry (GC-MS) metabolomics method, as described previously (Ma et al., 2018). In brief, urinary samples were first thawed at room temperature, shaken well, and then centrifuged at 12,000 rpm for 10 min. One hundred microliters of supernatant were mixed with 70 IU of urease for 15 min for urea degradation, and then methanol and myristic acid were added. The supernatant was centrifuged and dried under a nitrogen stream. Carbonyl groups were then methoximated by adding methoxyamine dissolved in pyridine. After that, N,O-bis(trimethylsilyl)trifluoroacetamide was used as the derivatizing reagent. The Agilent 6890/5975B GC/MSD system was used to perform the sample analysis. Each 1 µL of analyte was injected into a capillary column (Agilent J&W DB-5ms Ultra Inert, 30 m × 250 µm i.d., 0.25 µm film thickness) with high-purity helium as the carrier gas at a constant flow rate of 1.0 ml/min. The solvent delay time was set to 5 min. The GC temperature program was set at 70°C for 2 min, followed by a 2.5°C/min oven temperature ramp to 160°C, then raised to 240°C at a rate of 5°C/min, and maintained at that temperature for 16 min. The temperatures of the injector, the EI ion source, and the interface were set to 280, 230, and 260°C, respectively. The measurements were collected using electron impact ionization (70 eV) in full-scan mode (m/z 50-600).
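For orientation only, the oven program described above adds up to roughly a 70-min GC run (2 min initial hold + 36 min ramp to 160°C + 16 min ramp to 240°C + 16 min final hold), excluding the 5-min solvent delay. The short sketch below simply encodes those segments and checks that arithmetic; the data layout is an assumption for illustration and is not part of the original workflow.

```python
# Each segment: (start_temp_C, end_temp_C, rate_C_per_min, hold_min)
oven_program = [
    (70, 70, 0.0, 2.0),     # initial hold at 70 °C for 2 min
    (70, 160, 2.5, 0.0),    # ramp at 2.5 °C/min to 160 °C (36 min)
    (160, 240, 5.0, 0.0),   # ramp at 5 °C/min to 240 °C (16 min)
    (240, 240, 0.0, 16.0),  # final hold at 240 °C for 16 min
]

def segment_minutes(start: float, end: float, rate: float, hold: float) -> float:
    ramp = 0.0 if rate == 0 else abs(end - start) / rate
    return ramp + hold

total_minutes = sum(segment_minutes(*seg) for seg in oven_program)
print(total_minutes)  # 70.0 min (the 5 min solvent delay is not included)
```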
Microbial community profiling
Total microbial DNA was extracted from fecal contents using the E.Z.N.A. soil DNA kit (Omega Bio-Tek, Norcross, GA, USA), according to standard protocols. Gel electrophoresis (0.8% agarose gel) was used to check the extracted DNA, and an ultraviolet spectrophotometer (Thermo Fisher Scientific, Wilmington, USA) was used to evaluate DNA concentration and purity. To amplify the hypervariable regions (V3-V4) of the bacterial 16S ribosomal RNA gene, a set of primers (338F: 5′-ACTCCTACGGGAGGCAGCAG-3′ and 806R: 5′-GGACTACHVGGGTWTCTAAT-3′) was used. Polymerase chain reaction (PCR) amplification products were identified by 2% agarose gel electrophoresis and then purified with the AxyPrep DNA Gel Extraction kit (AXYGEN Biosciences, Union City, CA, USA). The purified amplicons were combined at an equimolar ratio and quantified using a microplate reader (BioTek, FLx800, USA). Finally, paired-end sequencing was performed on the Illumina MiSeq platform according to standard instructions (Majorbio Biopharm Technology Co., Ltd., Shanghai, PRC).
FIGURE 1 | Electroacupuncture ameliorates cognitive deficits in PS cDKO mice. The duration (A) and frequency (B) of entries into the novel arm of the Y maze. The symbol "•" indicates an individual mouse. (C,D) The preference index in the novel object recognition task. One-way ANOVA, *P < 0.05, **P < 0.01, N = 6.
Data analysis
First, the raw GC-MS data were converted into NetCDF format by the Agilent MSD workstation. XCMS toolkit scripts and R 2.13.2 (Lucent Technology, Reston, VA, USA) packages were then used for preprocessing, and subsequently Simca 14 software (Umetrics, Umea, Sweden) was used for further processing. Raw FASTQ files were demultiplexed and quality-filtered by fastp version 0.20.0 and merged by FLASH version 1.2.7 as we described previously (Gao et al., 2021b). In brief, the 300 bp reads were truncated at any site receiving an average quality score lower than 20 over a 50 bp sliding window. Only overlapping sequences of >10 bp were assembled, and the maximum mismatch ratio of the overlap region was 0.2. In addition, samples were distinguished based on barcodes and primers, and at most two nucleotide mismatches were accepted in primer matching. Operational taxonomic units (OTUs) picked at a 97% similarity cutoff were clustered using UPARSE version 7.1. The taxonomy of each OTU representative sequence was analyzed by RDP Classifier version 2.2 with a confidence threshold of 0.7. Differences in α-diversity were computed with the Shannon and Chao indices. β-diversity analysis was performed to estimate the difference or similarity of community structure between groups, visualized in principal coordinate analysis (PCoA) plots. Statistical significance was assessed by partial least squares discriminant analysis (PLS-DA). Differentially abundant bacterial taxa among groups were estimated by the linear discriminant analysis (LDA) effect size (LEfSe).
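As a minimal illustration of the two α-diversity indices named above, they can be computed from a per-sample OTU count vector as in the sketch below; the study's reported values come from the sequencing pipeline described in this section, not from this code.

```python
import numpy as np

def shannon_index(counts) -> float:
    """Shannon diversity H' = -sum(p_i * ln p_i) over the observed OTUs."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def chao1_index(counts) -> float:
    """Chao1 richness = S_obs + F1*(F1 - 1) / (2*(F2 + 1)),
    where F1 and F2 are the numbers of singleton and doubleton OTUs."""
    counts = np.asarray(counts)
    s_obs = int((counts > 0).sum())
    f1 = int((counts == 1).sum())
    f2 = int((counts == 2).sum())
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

# Example OTU count vector for a single sample
otu_counts = [120, 55, 3, 1, 1, 2, 0, 40]
print(shannon_index(otu_counts), chao1_index(otu_counts))
```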
Statistical analyses were carried out with one-way ANOVA, the two-tailed Student's t-test, or Pearson's multivariate linear regression analysis in SPSS 25.0. Spearman's correlation coefficients between the perturbed urinary metabolome and gut taxa were displayed as a heatmap. All numerical data are shown as means ± standard deviation (SD). P < 0.05 was considered statistically significant. The metabolic associations of each well-correlated gut microbial member (|r| > 0.4) were presented as a cross-correlation diagram.
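A hypothetical sketch of the Spearman correlation step described above is shown below: each urinary metabolite is correlated with each genus-level abundance across samples, and only pairs with |r| > 0.4 are kept for the cross-correlation diagram. The table layout and names are assumptions, not the study's actual scripts.

```python
import pandas as pd
from scipy.stats import spearmanr

def metabolite_taxa_correlations(metabolites: pd.DataFrame,
                                 genera: pd.DataFrame,
                                 r_cutoff: float = 0.4) -> pd.DataFrame:
    """Spearman correlations between urinary metabolites and genus-level taxa.
    Both tables are samples x features and must share the same sample index."""
    records = []
    for met in metabolites.columns:
        for genus in genera.columns:
            r, p = spearmanr(metabolites[met], genera[genus])
            if abs(r) > r_cutoff:  # keep only well-correlated pairs
                records.append({"metabolite": met, "genus": genus, "r": r, "p": p})
    return pd.DataFrame(records)
```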
Electroacupuncture ameliorates cognitive deficits in PS cDKO mice
PS cDKO mice are a generally accepted AD model characterized by cognitive deficits, which is consistent with our study. It is well known that mice prefer to explore new things. In the Y maze, compared with WT controls, PS cDKO mice showed significantly reduced duration and frequency of entries into the novel arm, which had been blocked at first, indicating reduced spatial recognition memory. However, EA treatment improved the duration and frequency observed in the novel arm (Figures 1A, B). Interestingly, similar results were obtained in the novel object recognition task. During the training session, mice in all groups did not exhibit different preferences for the two similar objects (Figure 1C). At 1 h after the training session (testing short-term memory), WT mice displayed an obvious preference for the novel object, while PS cDKO mice did not spend much time exploring the novel object (Figures 1C, D). Although neither WT nor PS cDKO mice showed such a preference in the subsequent 24 h tests (for long-term memory), these results suggested that PS cDKO mice had impaired short-term novel object recognition memory. However, the time spent exploring the novel object in the 1 h tests was reversed by EA treatment.
All these results indicated that EA ameliorated short-term memory deficits.
FIGURE 2 | Scores plots of multivariate statistical analysis of urinary metabolites. PCA scores plot (A) and PLS-DA scores plot (B) of the WT, cDKO, and cDKO + EA groups. N = 6.
Electroacupuncture alters urinary metabolome in PS cDKO mice
By establishing principal component analysis (PCA) and PLS-DA models, we observed the overall clustering and trends among groups. As shown in Figure 2A, significant separation was observed among the WT, cDKO, and cDKO + EA groups in the PCA score plot (R2X = 0.821, Q2 = 0.508). Moreover, in the PLS-DA score plot (Figure 2B), the cDKO group was separated from the WT and cDKO + EA groups (R2X = 0.864, R2Y = 0.982, Q2 = 0.959). These results indicated that the models were constructed successfully and that EA treatment appeared to ameliorate the urine metabolic alterations induced by cDKO. In the OPLS-DA plots (Supplementary Figures 1A, B), the WT and cDKO groups displayed significant deviation (R2X = 0.689, R2Y = 0.992, Q2 = 0.918). Supplementary Figure 1C also showed that the cDKO + EA group had a distinctive metabolic profile compared with the cDKO group (R2X = 0.639, R2Y = 0.956, Q2 = 0.798).
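The multivariate models above were fitted in Simca; purely as a rough illustration of the unsupervised PCA step (the supervised PLS-DA/OPLS-DA models are not reproduced here), the sample scores could be obtained as follows. All names are hypothetical.

```python
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_scores(intensity_matrix, n_components: int = 2):
    """Scale each metabolite to unit variance, fit PCA, and return the sample
    scores together with the fraction of variance explained per component.
    `intensity_matrix` is a samples x metabolites array."""
    scaled = StandardScaler().fit_transform(intensity_matrix)
    pca = PCA(n_components=n_components).fit(scaled)
    return pca.transform(scaled), pca.explained_variance_ratio_
```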
Differential metabolites were screened and identified using the S-plot and VIP (variable importance in projection) values from OPLS-DA (VIP > 1) and were further verified by a pairwise t-test (P < 0.05) (Xue et al., 2021). As shown in Figure 3A, a total of 11 differential metabolites were identified in the WT group compared with the cDKO group, of which m-cresol and isovalerylglycine were decreased in the WT group and the other nine differential metabolites were increased in the WT group. At the same time, we also observed the abundance of those 11 differential metabolites in the cDKO + EA group (Supplementary Figure 1E). Compared with the cDKO group, the cDKO + EA group showed an obvious decrease in isovalerylglycine and increases in glycine and threonic acid, with the other eight metabolites changed insignificantly (Supplementary Figure 1E and Figure 3B). Moreover, a total of seven differential metabolites were identified between the cDKO and cDKO + EA groups, of which three were reduced in the cDKO + EA group (Figure 3B). The details of these metabolites are presented in Supplementary Tables 1, 2.
Electroacupuncture affects the metabolic pathway and network analysis
The KEGG and HMDB databases were employed to correlate the differential urine metabolites with potentially related pathways, and MetaboAnalyst 3.0 was used to further determine their impact values. The eight most relevant metabolic pathways were found to be disturbed (impact factor ≥ 0.1) when comparing the cDKO group with the WT group: glyoxylate and dicarboxylate metabolism; citrate cycle (TCA cycle); alanine, aspartate, and glutamate metabolism; glutathione metabolism; arginine biosynthesis; pentose and glucuronate interconversions; glycine, serine, and threonine metabolism; and d-glutamine and d-glutamate metabolism (Figure 4A). Moreover, two main affected metabolic pathways were observed between the cDKO and cDKO + EA groups: glyoxylate and dicarboxylate metabolism, and glycine, serine, and threonine metabolism (Figure 4B). These two metabolic pathways were common to both comparisons, suggesting that they may have an important role in the pathological process of AD and in the course of EA treatment.
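As a minimal, hypothetical sketch of the metabolite screening criteria used above (VIP > 1 from a previously fitted OPLS-DA model combined with a pairwise t-test at P < 0.05), a metabolite table could be filtered as follows; the data structures are assumptions, not the study's actual scripts.

```python
import pandas as pd
from scipy.stats import ttest_ind

def screen_differential_metabolites(intensities: pd.DataFrame,
                                    groups: pd.Series,
                                    vip: pd.Series,
                                    vip_cutoff: float = 1.0,
                                    p_cutoff: float = 0.05) -> pd.DataFrame:
    """Keep metabolites with VIP > vip_cutoff that are also significant in a
    two-group t-test. `intensities` is samples x metabolites, `groups` holds the
    two group labels per sample, and `vip` holds one VIP score per metabolite."""
    a, b = groups.unique()[:2]
    rows = []
    for met in intensities.columns:
        t, p = ttest_ind(intensities.loc[groups == a, met],
                         intensities.loc[groups == b, met])
        if vip.get(met, 0) > vip_cutoff and p < p_cutoff:
            rows.append({"metabolite": met, "t": t, "p": p, "VIP": vip[met]})
    return pd.DataFrame(rows)
```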
Electroacupuncture changes the gut microbiome in PS cDKO mice
Our previous study observed that the gut microbiome is changed in PS cDKO mice (Gao et al., 2021b), but the effect of EA on it was still unclear. The α-diversity indices, including Shannon and Chao1, were calculated to reflect community diversity and richness in the three groups. The Shannon index showed that PS cDKO mice have reduced diversity within the microbial community; however, this was reversed after PS cDKO mice were treated with EA (Figure 5A). The same trend was observed for the Chao1 index (Figure 5B), showing that EA improved the community richness of PS cDKO mice. Furthermore, PCoA and PLS-DA demonstrated an obviously different community structure of the gut microbiome between PS cDKO mice and WT mice, and the community structure changed after PS cDKO mice received EA (Figures 5C, D). Notably, in the PCoA, the community structure of the cDKO + EA group was closer to that of the WT group. Furthermore, we studied the community abundance of the gut microbiome in the three groups, which shows the microbial community compositions intuitively. At the genus level, Lactobacillus, Bacteroides, and Dubosiella were the three most prominent communities in the three groups, followed by Lachnospiraceae_NK4A136_group and Prevotellaceae_UGG-001, and the abundance of these three genera was increased in PS cDKO mice (Figure 5E). After EA, their total abundance decreased, while the abundances of Lactobacillus and Bacteroides showed mixed fluctuations. In addition, PS cDKO mice displayed declines in Lachnospiraceae_NK4A136_group and Prevotellaceae_UGG-001; their relative abundances were partially, but not markedly, reversed in PS cDKO mice treated with EA. These results suggested that EA increased the diversity and richness of the microbial community and changed the community structure in PS cDKO mice, with the abundance of some of the main altered microbiota moderated.
FIGURE 3 | Heat map of the differential metabolites in the cDKO group and the WT group (A), and in the cDKO group and the cDKO + EA group (B). N = 6.
Electroacupuncture moderates the structure of gut taxa in PS cDKO mice
To clarify the exact changes in the structure of the gut taxa, we conducted LEfSe analysis to compare the gut microbiota of the different groups at diverse taxonomic levels. LEfSe analysis revealed that PS cDKO mice mainly showed a higher abundance of g_Lactobacillus, f_Lactobacillaceae, o_Lactobacillales, c_Bacilli, f_Peptostreptococcaceae, g_Romboutsia, and g_Ralstonia (LDA score ≥ 3) from phylum to genus (Figure 6A); in contrast, WT mice were largely characterized by higher enrichment of 27 other taxa.
Furthermore, compared with PS cDKO mice, PS cDKO mice treated with EA were characterized by higher enrichment of f_Marinifilaceae, g_Odoribacter, f_Deferribacteraceae, g_Mucispirillum, c_Deferribacteres, p_Deferribacteres, o_Deferribacterales, g_unclassified_f_Ruminococcaceae, f_Clostridiaceae_1, g_Candidatus_Arthromitus, and g_unclassified_f_Atopobiaceae (LDA score ≥ 3) (Figure 6B). The hierarchical relationships among the enriched taxa in each group were clearly exhibited in the cladograms (Figures 6C, D). G_Lactobacillus, f_Lactobacillaceae, and o_Lactobacillales, the three most enriched taxa in PS cDKO mice, are all subsets of c_Bacilli, and g_Lactobacillus is the next hierarchical level under f_Lactobacillaceae (Figure 6C). The relationship of the main taxa in PS cDKO mice with EA treatment, from g_Mucispirillum to p_Deferribacteres, was also well demonstrated (Figure 6D). In addition, we analyzed the changes in the gut microbiota between the groups at the genus level. As shown in Figure 5E, the cDKO group had 11 changed gut taxa when compared with the WT group. Norank_f_Muribaculaceae, Lactobacillus, Lachnospiraceae_NK4A136_group, and Mucispirillum were the primary differential taxa, all obviously declined except Lactobacillus. Moreover, the proportions of five taxa were significantly different in the cDKO + EA group; the proportions of Mucispirillum, unclassified_f_Atopobiaceae, and Ruminiclostridium_5, the dominant taxa among them, were significantly increased. Taking all this into consideration, we conclude that EA significantly affects the gut taxa of PS cDKO mice and that Mucispirillum may be the key taxon in these changes.
Discussion
Disorder of the gut microbiome is an important factor driving the pathogenesis of neurodegenerative disorders, especially AD. An increasing number of studies report that the gut microbiome is involved in multiple features of AD, such as cognitive impairment, hippocampal Aβ plaques, destroyed neuronal integrity and plasticity, and inflammation (Fang et al., 2020; Ma J. et al., 2020; Wang et al., 2020). Although effective targeted therapeutic treatments addressing the pathogenesis of AD are lacking, EA is considered a promising way to attenuate AD-related symptoms. The currently widely accepted understanding is that EA reduces cognitive impairment via anti-neuroinflammatory effects (Cai et al., 2019). However, little is known regarding the effect of EA on the gut microbiota. Therefore, we performed 16S rRNA gene sequencing to explore the effect of EA on cognitive impairment in AD model mice, PS cDKO mice, in relation to the regulation of the microbiota. The acupoints HT7 and KI3, used here for memory improvement, derive from the Traditional Chinese Medicine "ShenMing" therapy, which is supported by abundant clinical practice.
Amino acids are important in neurotransmission and receptor function and are related to neurotoxicity, and changes in amino acid metabolism can be an early indicator of neurodegeneration in AD (Fonteh et al., 2007). Neurotransmitters are important components of the nervous system, closely related to the learning and memory abilities of organisms. Recent studies have indicated alterations in amino acid metabolism in AD (Kaddurah-Daouk et al., 2011; González-Domínguez et al., 2014). Glycine is one of the components for the synthesis of reduced glutathione, whose deficiency promotes the pathogenesis of AD (Wu et al., 2004; Peter and Fau-Braidy, 2015).
As an agonist of the N-methyl-d-aspartate glutamate receptor (NMDAR), glutamate is a major excitatory neurotransmitter in the mammalian central nervous system (CNS) (Niciu et al., 2012). It also derives from glutathione metabolism. In the brain, it is used for energy formation and for the biosynthesis of the inhibitory mediator γ-aminobutyric acid (GABA) (Patel et al., 2005). Glutamate and its receptors, primarily the ligand-gated ionotropic glutamate receptors (iGluRs), play fundamental roles in synaptic plasticity as well as in the underlying molecular mechanisms of learning and memory (Chang et al., 2020), and they mediate most of the excitatory neurotransmission in the mammalian CNS. Studies have shown that a reduced plasma glutamate level in patients with AD is associated with cognitive impairment. Furthermore, another study found that decreased hippocampal glutamate in patients with mild cognitive impairment and AD was associated with episodic memory performance (Wong et al., 2020). L-threonate (threonic acid) is a naturally occurring sugar acid, approximately 10% of which is excreted in urine (Sun et al., 2016). It is widely reported that threonic acid has effects on the CNS. For example, oral administration of L-threonate magnesium salt (L-TAMS) can upregulate NMDAR signaling, prevent synaptic loss, reverse memory deficits in aged rats, and improve synaptic density and memory in APPswe/PS1dE9 mice (Slutsky et al., 2010; Li W. et al., 2014). Moreover, older adults aged 50-70 years with cognitive impairment who orally took a compound containing L-TAMS for 12 weeks showed restored cognitive function. Importantly, intake of Mg(2+) with other anions did not produce the same results. In our study, PS cDKO mice demonstrated significantly decreased threonic acid compared with the WT controls, but EA significantly reversed the level of threonic acid. Therefore, EA may alleviate the symptoms of AD mice by increasing the threonic acid content.
FIGURE 6 | The taxa of gut microbiota affected by electroacupuncture. LEfSe analysis from the phylum to genus level in the WT and cDKO groups (A) and in the cDKO and cDKO + EA groups (B). Taxa enriched in the three groups are indicated by LDA scores (green for the WT group, blue for the cDKO group, and red for the cDKO + EA group). The LDA score threshold is ≥3. (C,D) The cladograms of enriched taxa from the phylum to genus level. Differential abundance analysis of taxa at the genus level in the WT group and the cDKO group (E), and in the cDKO group and the cDKO + EA group (F). Student's t-test, *P < 0.05, **P < 0.01, N = 6.
Although our study showed that EA reduced the increased concentration of isovalerylglycine in the urine of PS cDKO mice, the connection between isovalerylglycine and AD is not clear. Interestingly, it was reported that lower kidney clearance of isovalerylglycine was associated with a long-term decline in cognitive function in people with chronic kidney disease, whereas its serum concentration was not related to the decline of cognitive ability (Cunnane et al., 2011). Citric acid, cis-aconitate, and succinic acid are important components of the TCA cycle, which is involved in glucose metabolism. Glucose metabolism plays an important role in patients with AD: it has been reported that neuronal glucose metabolism decreases by 20-25% in patients with AD, limiting cellular metabolic capacity and leading to oxidative stress (Cunnane et al., 2011; Butterfield and Halliwell, 2019). The levels of those three metabolites in AD mice were much lower than those of WT mice, suggesting that glucose metabolism was reduced in AD mice. Unfortunately, EA did not improve glucose metabolism in cDKO mice. Fructose is also an important energy substrate and plays a completely different role in the occurrence and development of AD. There is evidence that the threefold to fivefold higher cerebral sorbitol and fructose levels in patients with AD may be related to the production of endogenous fructose (Xu et al., 2016). Excessive activation of brain fructose metabolism causes mitochondrial oxidative stress and local inflammation, and mitochondrial energy production is hindered by insufficient neuronal glycolysis, resulting in the gradual loss of the brain energy level required by neurons to maintain function and survival (Johnson et al., 2020).
FIGURE 7 | The relevance between the gut microbiota at the genus level and the differential urinary metabolites. (A-C) Spearman's correlation heat map: red indicates a positive correlation, while green indicates a negative correlation; a deeper color means a greater correlation (*P < 0.05, **P < 0.01). (B-D) The gut microbiota at the genus level, predicted by metabolic variation (|r| > 0.4), is labeled with a similar value. Lines connecting with metabolites show the direction of the relevance to each genus of microbe, with red (positive) or blue (negative) lines. N = 6.
Butyrate is a multifunctional molecule that regulates host energy metabolism and immune function, both as an energy source used through the β-oxidation pathway and as an inhibitor of histone deacetylases (Ferrante et al., 2003; Kim et al., 2009; Govindarajan et al., 2011; Stilling et al., 2016). In addition, butyrate was also reported to reduce gut inflammation by decreasing the activities of the NF-κB and signal transducer and activator of transcription 3 (STAT3) pathways and by promoting T-regulatory cell differentiation. In Figure 3B, butyrate levels were significantly increased in the cDKO + EA group compared with the cDKO group; therefore, we reasoned that the improvement of AD-related symptoms by EA might be related to the increase in butyrate. Similarly, a modified Mediterranean-ketogenic diet benefits AD and increases fecal propionate and butyrate in subjects with mild cognitive impairment, and butyrate correlated negatively with Aβ-42 in cerebrospinal fluid (Nagpal et al., 2019). Moreover, in vitro research found that pretreatment of amyloid beta (Aβ)-induced BV2 cells with butyrate suppressed microglial activation, reduced the expression of cyclooxygenase-2 (COX-2), and reversed the phosphorylation of NF-κB p65. This finding is consistent with our study, although the focus was different.
FIGURE 8 | Schematic diagram representing the effect of electroacupuncture on the urinary metabolome and microbiota in PS cDKO mice and the correlation of gut microbiota and urinary metabolome.
Our previous study demonstrated that PS cDKO mice showed altered microbiota and metabolites (Gao et al., 2021b). Furthermore, in this study, behavioral tests (Y maze and Novel object recognition task) demonstrated ameliorated cognitive deficits in PS cDKO mice after EA treatment (Figure 1). In addition, EA could modulate the imbalance of the gut microbiota in PS cDKO mice, showing increased richness and evenness in the microbiotic community ( Figure 5).
TABLE 1 | Reversed metabolites and microbiome by EA in PS cDKO mice: the connection between the reversed metabolites/microbiome and pathological manifestations of AD, the patient/model, and references. Entries include: increased metabolite glycine, one of the components for the synthesis of glutathione, whose deficiency promotes the pathogenesis of AD (Peter and Fau-Braidy, 2015), and which attenuates the inflammatory responses in the jejunum and colon, such as enhanced mRNA levels of TLR4 and pro-inflammatory cytokines (Herp et al., 2019).
Consistent with our previous study, PS cDKO mice displayed reduced enrichment of norank_f_Muribaculaceae, Lachnospiraceae_NK4A136_group, and Mucispirillum. Importantly, norank_f_Muribaculaceae is positively associated with the formation and barrier function of the inner mucus layer in the gut (Volk et al., 2019) and is predicted to produce propionate in feces as a fermentation end product. Furthermore, norank_f_Muribaculaceae and propionate have both been linked with gut health and increased longevity in mice in previous studies (Sibai et al., 2020; Smith et al., 2021). Other studies on sepsis-related liver injury (SLI) demonstrated that metformin improves liver damage, regulates colon barrier dysfunction, and reduces inflammation in aged SLI rats, accompanied by an increased proportion of Muribaculaceae (Liang et al., 2022). Lachnospiraceae_NK4A136_group, also a short-chain fatty acid (SCFA)-producing bacterium, is considered to be correlated with enhanced gut barrier function (Ma L. et al., 2020). It was also observed to decrease in diet-induced obese mice and to be subsequently increased by spermidine (Ma L. et al., 2020). Mucispirillum, affiliated with the phylum Deferribacteres, is an immune-inducing bacterial group (Herp et al., 2021), but it also promotes health in the immunocompetent host.
Research found that C57BL/6 mice with lipopolysaccharide (LPS)-induced intestinal injury showed an increased abundance of Mucispirillum; moreover, as the inflammatory responses in the jejunum and colon, such as enhanced mRNA levels of Toll-like receptor 4 (TLR4), pro-inflammatory cytokines, and chemokines, were attenuated after glycine administration, the relative abundance of Mucispirillum increased further. Moreover, Mucispirillum schaedleri, a species affiliated with the genus Mucispirillum, was reported to protect mice against Salmonella typhimurium-induced colitis, partially by competing for anaerobic electron acceptors, and to be important for intestinal homeostasis (Herp et al., 2019). In addition, transgenic mice that overexpress the tryptophan-metabolizing enzyme indoleamine 2,3-dioxygenase 1 (IDO1) showed twofold thicker intestinal mucus layers than control mice, with increased proportions of Mucispirillum schaedleri (Alvarado et al., 2019). Although it is hard to explain the reduced abundance of Mucispirillum in PS cDKO mice, EA significantly improved its abundance, which may interfere with the progress of AD (Figures 6E, F). On the other hand, besides PS cDKO mice, patients with functional constipation were also reported to have a significantly lower abundance of Mucispirillum (Sugitani et al., 2021). Lactobacillus species are attractive hosts because of their GRAS (generally recognized as safe) status (Peiroten and Landete, 2020). Lactobacillus and its derivatives possess anti-biofilm, antioxidant, pathogen-inhibiting, and immunomodulatory activities (Slattery et al., 2019; Chee et al., 2020). A reasonable explanation for the increased Lactobacillus in PS cDKO mice could be that it is a response to immunomodulation; a tauopathy mouse model, P301L mice, also showed increased Lactobacillus in fecal samples. Differential urinary metabolites were integrated with the gut microbiota at the genus level to point out the relationship between gut microbial and host metabolism, especially in PS cDKO mice treated with EA (Figure 7). As shown in Figure 4, glycine, serine, and threonine metabolism and glyoxylate and dicarboxylate metabolism were the two main metabolic pathways disturbed by EA. The levels of glycine and threonic acid, which significantly increased after EA treatment, were both positively correlated with Lachnospiraceae_NK4A136_group and Mucispirillum (Figure 7), and the abundance of Mucispirillum was significantly improved as well, although its immunomodulatory activities are still thought to be complex. As mentioned earlier, Lachnospiraceae_NK4A136_group is correlated with enhanced gut barrier function, and intake of L-TAMS upregulates NMDAR signaling, improves synaptic density, and reverses cognitive deficits. Therefore, we may speculate that the Mucispirillum increased by EA is beneficial for recovery from AD. In addition, butyrate, as a key mediator of host-microbe crosstalk, restored intestinal mucosal damage induced by a high-fat diet, increased the expression of zonula occludens-1 in the small intestine, and further decreased the levels of gut endotoxin in serum and liver (Tian et al., 2019), and Lactobacillus can produce butyric acid, in what seems like a virtuous circle. In any case, the association with neurological diseases and related biomarkers needs to be further explored.
Conclusion
Based on metabolomics and microbial community analysis, our study showed that EA alleviated cognitive deficits in PS cDKO mice along with altered urinary metabolites and gut microbiota (Figure 8). The EA treatment mainly disturbed two metabolic pathways: glyoxylate and dicarboxylate metabolism, and glycine, serine, and threonine metabolism. Not only the decreased community diversity and richness but also the abundance of the main gut microbiome members in PS cDKO mice was influenced by EA. Furthermore, the differential urinary metabolites and gut microbiota were correlated. Differential urinary metabolites, including increased isovalerylglycine and decreased glycine and threonic acid in PS cDKO mice, were reversed by EA, as was the differential gut microbiota member Mucispirillum. Based on previous studies, glycine helps attenuate the inflammatory response and is an element for the synthesis of glutathione, the deficiency of which promotes the pathogenesis of AD. At the same time, threonic acid has been shown to interfere with pathological manifestations of AD, for example by improving synaptic density, upregulating NMDAR expression, and restoring cognitive function (Table 1). In our study, Mucispirillum was positively correlated with glycine and threonic acid (Figure 7) and could reduce intestinal inflammation (Table 1). The bidirectional crosstalk in the MGB axis includes the immune system pathway, and neuroinflammation in AD may be relieved by the modulation of systemic immunity. These may be the potential mechanisms by which EA improves cognitive ability in PS cDKO mice. Unfortunately, we did not measure the expression of these markers involved in the pathogenesis of AD, but this will be a worthwhile direction for future study.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repositories and accession numbers can be found at: https://www.ncbi.nlm.nih.gov/sra/PRJNA923176, PRJNA923176.
Ethics statement
The animal study was reviewed and approved by the Animal Experimentation of Shanghai University of Traditional Chinese Medicine (permit number: PZSHUTCM191025005).
Supraphysiological Role of Melatonin Over Vascular Dysfunction of Pregnancy, a New Therapeutic Agent?
Hypertension can be induced by the disruption of factors in blood pressure regulation. These include several systems, such as the neurohumoral and renin-angiotensin-aldosterone systems, the circadian clock, and melatonin production, whose disruption can induce elevated and non-dipping blood pressure. Melatonin has a supraphysiological role as a chronobiotic agent and modulates vascular system processes via pro/antiangiogenic factors, inflammation, the immune system, and oxidative stress regulation. An elevation of melatonin production is observed during pregnancy, modulating the physiological functions of the placenta and fetus. Impaired melatonin production can induce temporal desynchronization of cell proliferation, differentiation, or invasion by trophoblast cells, resulting in vascular insufficiencies and elevating the risk of poor fetal/placental development. Several genes are associated with vascular disease and hypertension during pregnancy via impaired inflammatory response, hypoxia, and oxidative stress, such as the cytokines/chemokines IL-1β, IL-6, and IL-8, and impaired expression of the HIF1α and eNOS genes in endothelial cells/VSMCs. Pathological placentas showed differentially expressed genes (DEG), including the vascular genes CITED2, VEGF, PL-II, PIGF, sFLT-1, and sENG, the oncogene JUNB, the scaffolding protein CUL7, GPER1, and the SIRT/AMPK and MAPK/ERK pathways. Additionally, we observed modification of subunits of NADPH oxidase and of extracellular matrix elements, i.e., glypican and heparanase, and of the KCa channel. Mothers with a low level of melatonin showed low production of the proangiogenic factor VEGF, increasing the risk of preeclampsia, premature birth, and abortion. In contrast, melatonin supplementation can reduce systolic pressure, prevent oxidative stress, induce the activation of the antioxidant system, and lessen proteinuria and the serum level of sFlt-1. Moreover, melatonin can repair the endothelial damage from preeclampsia at the placental level, increasing PIGF, Nrf-2, and HO-1 production and reducing critical markers of vascular injury during pregnancy. Melatonin also restores the umbilical and uterine blood flow after oxidative stress and inhibits vascular inflammation and VCAM-1, Activin-A, and sEng production. The beneficial effects of melatonin over pathological pregnancies can be partially observed in normal pregnancies, suggesting that its dual role over placental physiology could contribute to protection and may have therapeutic applications in vascular pathologies of pregnancy in the future.
INTRODUCTION
The control of blood pressure results from the contribution of several tissues and neural circuits via the multifactorial interaction of physiological factors such as heart rate, cardiac output, and peripheral resistance. Peripheral resistance determines the peripheral blood circulation and depends on arterial and venous tone. The chronic elevation of blood pressure (persistently raised pressure, >140/90 mmHg), or hypertension, is a severe medical condition associated with elevated risk of morbidity and mortality worldwide and is a major cardiovascular risk factor. These pathologies are present in between 20 and 25% of the population, or about 1.13 billion people worldwide (World Health Organization, 2013), and affect about 8% of reproductive-aged women, representing about 688 million women (Mupfasoni et al., 2018; Braunthal and Brateanu, 2019). The severe expression of hypertension is named malignant hypertension or accelerated hypertension, affecting about 2-7 cases per 100,000 inhabitants, and this rate increases every year (Shantsila and Lip, 2017), suggesting that this pathology and its more extreme variants have become a massive health problem in terms of morbidity and mortality worldwide.
Several factors modulate vascular circulation and blood pressure; they can be divided into intrinsic and extrinsic pathways, modulating vascular tone, coagulation, and flow in the vascular system. Intrinsic regulation pathways involve the paracrine production of endothelial cells, periadventitial adipose tissue, and vascular smooth muscle cells. Extrinsic regulation involves neuronal regulation, such as sympathetic/parasympathetic innervation, and humoral secretion from the endocrine system. Intrinsic regulation occurs via the paracrine release of cytokines, gasotransmitters, growth factors, vasoactive peptides, vascular protective agents, anticoagulants, angiogenic peptides, and others, which maintain the vasomotor and mitogenic balance required for an adequate vascular tone in the peripheral circulation (Konukoglu and Uzun, 2017; Gheibi et al., 2018; Oparil et al., 2018). These complex interactions require a supraphysiological regulation that includes the participation of the neurohumoral system, comprising the renin-angiotensin-aldosterone system (RAAS), the circadian system, and melatonin production by the pineal gland (see Figure 1) (Baker and Kimpinski, 2018; Nakashima et al., 2018; Oparil et al., 2018; Zuo and Jiang, 2020). Disruption of the intrinsic or extrinsic factors involved in blood pressure regulation can induce elevated and non-dipping blood pressure, resulting in damage to vascular cells or tissues. Moreover, these factors can be affected by nutrition, environment, fetal programming, adiposity, diet, sodium and potassium intake, alcohol intake, smoking, physical activity, air pollution, and stress, which give multivariable causes and expressions for hypertension (NCD Risk Factor Collaboration (NCD-RisC), 2017). However, a multifactorial gene-environment etiology is associated with 90-95% of patients with primary hypertension, and a genetic component is implicated in about 35-50% of patients, suggesting the relevance of finding new pathways and new molecular markers to help predict the risk of morbidity and mortality from hypertension (Oparil et al., 2018).
Melatonin acts as a chronobiotic agent synchronizing the circadian system and plays a supraphysiological role in modulating other vascular system processes via the modulation of inflammation, the immune system, and oxidative stress. This role was first described in the late 1960s in pinealectomized rats (Zanoboni and Zanoboni-Muciaccia, 1967): a 30% increase in blood pressure (hypertension) was observed 15 days after surgery. Melatonin supplementation can partially revert the harmful effects of this hypertension, such as lipoperoxidation, hydroxyl radical generation, and superoxide anion radical production, and can induce antioxidant capacity via increased glutathione (GSH) content (Mukherjee et al., 2010). Moreover, melatonin plays a protective role in restoring hemodynamic parameters after myocardial injury, explaining the blood pressure reduction in pathological conditions (Mukherjee et al., 2010); it also induced arterial vasorelaxation after vasoconstrictor treatment and prevented vasoconstriction at the level of the cerebral arteries (Torres-Farfan et al., 2008; Qiu et al., 2018). Patients treated with melatonin supplementation show a reduction of about 10% in MAP and SBP without alteration of heart rate (Bazyar et al., 2021), suggesting a protective role of melatonin over cardiovascular function through its antioxidant capacity and hypotensive effect.
Cardiovascular disease of the mother/offspring induces an adverse intrauterine environment that gives rise to outcomes such as fetal hypoxia, intrauterine growth restriction, gestational hypertension, and preeclampsia. During normal pregnancy, melatonin production increases with gestational age and falls immediately after delivery (Nakamura et al., 2001; Ejaz et al., 2020). Impaired melatonin production is associated with complications during pregnancy, such as severe preeclampsia, hypertension, and proteinuria; during severe preeclampsia, melatonin levels in women are reduced (Dou et al., 2019). However, melatonin supplementation can reduce oxidative stress and hypertension during pregnancy, suggesting that melatonin production during pregnancy maintains cardiovascular health, reducing premature birth and abortion (Valenzuela et al., 2015; de Chuffa et al., 2019).
In addition, other genetic components may potentially operate during hypertension, and these genes can be modulated by melatonin. For this purpose, we searched for differentially expressed genes (DEGs) detected during hypertension within the vascular smooth muscle contraction pathway, which regulates vascular tone. We found 36 upregulated and downregulated genes in vascular smooth muscle contraction pathways during hypertension (see Table 1), highlighting the number of pathways involved in this pathology and the complexity of the pathways involved in vascular smooth muscle.
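As a rough illustration of the DEG-selection step described above (threshold filtering of a GEO2R-style result table and restriction to a pathway gene list), the following Python sketch applies the default cut-offs mentioned later in the Table 1 legend (|logFC| ≥ 1, Benjamini-Hochberg adjusted P < 0.05). The file names and column names are assumptions modeled on a typical GEO2R export, not the actual files used in this study.

```python
import pandas as pd

# Hypothetical GEO2R export: one row per probe with logFC and BH-adjusted P value.
geo2r = pd.read_csv("GSE_example_geo2r_top_table.tsv", sep="\t")

# Hypothetical gene list for the KEGG "Vascular smooth muscle contraction" pathway.
with open("kegg_vsmc_genes.txt") as fh:
    vsmc_genes = {line.strip().upper() for line in fh if line.strip()}

# Default GEO2R-style cut-offs mentioned in the text: |logFC| >= 1 and adj.P < 0.05.
deg = geo2r[(geo2r["logFC"].abs() >= 1) & (geo2r["adj.P.Val"] < 0.05)]

# Keep only DEGs that fall inside the vascular smooth muscle contraction pathway.
deg_vsmc = deg[deg["Gene.symbol"].str.upper().isin(vsmc_genes)]

up = deg_vsmc.loc[deg_vsmc["logFC"] > 0, "Gene.symbol"].unique()
down = deg_vsmc.loc[deg_vsmc["logFC"] < 0, "Gene.symbol"].unique()
print(f"{len(up)} upregulated and {len(down)} downregulated pathway genes")
```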
HYPERTENSION AND PREGNANCY
The American College of Obstetricians and Gynecologists (ACOG) defines hypertension during pregnancy as a systolic and/or diastolic blood pressure ≥ 140/90 mm Hg (Bello et al., 2021). Following this criterion, hypertension affects about 8% of women of reproductive age and is present in about 10% of pregnancies (Braunthal and Brateanu, 2019).

FIGURE 1 | Modulation of blood pressure. Intrinsic and extrinsic pathways can modulate blood pressure. The principal regulators of both pathways are the neurohumoral renin-angiotensin-aldosterone system, the circadian system, and melatonin production.

TABLE 1 | Differentially expressed genes associated with hypertension (N = 36 genes). Analyses of the datasets available for hypertension in the GEO database (http://www.ncbi.nlm.nih.gov/geo) were performed (n = 73). We excluded platforms without gene accession numbers, with incomplete information, or reporting results from peripheral blood. Platforms GSE74288, GSE113613, GSE89073, GSE105114, GSE105114, GSE105114, GSE84704, GSE53363, GSE72707, GSE72181, GSE64613, GSE69601, GSE46863, GSE53408, GSE45927, GSE59437-BRAIN, GSE50833, GSE48936, GSE43292, GSE40182, GSE26671, GSE30428, GSE24988, GSE19817, GSE16624, and GSE5488 were visualized using the GEO Profile graphics web tool to compare the two groups, applying the Benjamini and Hochberg false discovery rate methodology with the default parameters (logFC ≥ 1 and adj. P < 0.05). For the functional analysis, we used the Kyoto Encyclopedia of Genes and Genomes (KEGG) and selected the vascular smooth muscle contraction pathway. The gene expression profiles were combined and the DEGs identified with a Venn diagram.
During pregnancy, about 4.3% of cases correspond to chronic hypertension and 6% are defined as gestational hypertension, which increases the negative outcomes for the mother and fetus, such as preeclampsia, preterm birth, and the baby being small for gestational age (Bello et al., 2021). Impaired placentation is the principal cause of complications in pregnancy. It causes several negative outcomes, including hypertension; its severe expression occurs in about 2% of pregnancies and accounts for about 16% of all maternal deaths, underscoring the relevance of preventing hypertension and preeclampsia (Romero et al., 2011; Poniedziałek-Czajkowska et al., 2021). Preeclampsia involves the vasculature through the impaired transformation of the spiral arteries and the reduced perfusion of the fetus and the placenta, with an elevation of oxidative and endoplasmic reticulum stress that finally induces fetal growth restriction. Under this systemic stress, the poorly remodeled uteroplacental spiral arteries release placental factors into the maternal circulation, increasing the maternal inflammatory response and oxidative stress, including high production of proinflammatory cytokines/chemokines such as IL-1β, IL-6, and IL-8 (Nunes et al., 2019; Valencia-Ortega et al., 2019; Poniedziałek-Czajkowska et al., 2021; Spence et al., 2021). Moreover, the impaired invasion of the uterine wall by the trophoblast induces hypoxia and modifies the placental secretion of critical angiogenic and antiangiogenic factors such as vascular endothelial growth factor (VEGF) (Frigato et al., 2009) and placental lactogen (PL-II), a member of the prolactin gene family. These vascular factors and their impaired secretion have been proposed as predictors of risk during pregnancy (Wang and Zhao, 2010; Valenzuela et al., 2012; Lenke et al., 2019). Besides, VEGF and placental growth factor (PlGF) induce placental angiogenesis via activation of VEGFR-1/Flt-1 and VEGFR-2/KDR, and both factors increase endothelial cell adhesion, chemotaxis, and angiogenesis (Helske et al., 2001; Poniedziałek-Czajkowska et al., 2021).
Several antiangiogenic factors are highly secreted during pathological pregnancies, such as soluble Fms-like tyrosine kinase-1 (sFlt-1), the soluble form of VEGF receptor-1, whose secretion by villous cytotrophoblast cells is induced by hypoxia during preeclampsia via activation of HIF1α. Another antiangiogenic factor secreted during preeclampsia is soluble endoglin (sEng), which reduces the proangiogenic and vasodilator effects of endoglin. Finally, the elevated production of sFlt-1 and sEng and the decrease in VEGF and PlGF secretion impair endothelial function, causing preeclampsia (De Oliveira et al., 2013; Shah and Khalil, 2015; Poniedziałek-Czajkowska et al., 2021).
Impaired production of pro/antiangiogenic factors, together with ROS stress, low activity of endothelial nitric oxide synthase (eNOS), and low production of nitric oxide (NO), is characteristic of preeclampsia. This last gasotransmitter is a potent vasodilator that is critical for appropriate trophoblast remodeling of the spiral arteries (Poniedziałek-Czajkowska et al., 2021).
Sirtuin 1 (SIRT1) plays a critical role during pregnancy as a stress-response and chromatin-silencing factor with NAD-dependent histone deacetylase activity involved in DNA replication, DNA repair, extension of life span, and reduction of apoptosis. Moreover, SIRT1 reduces the release of proinflammatory cytokines via inhibition of NF-kappa-B signaling (Chen et al., 2005; Abdelmohsen et al., 2007; Poniedziałek-Czajkowska et al., 2021). This signaling pathway is induced by AMP-activated protein kinase (AMPK). SIRT1 expression in the placenta and its plasma concentration are low in preeclamptic women at 20-25 weeks of gestation (Yin et al., 2017; Viana-Mattioli et al., 2020; Poniedziałek-Czajkowska et al., 2021).
The AMPK pathway is activated by increasing AMP levels and decreasing ATP levels (low energy). AMPK is a heterotrimeric protein composed of αβγ subunits and is expressed ubiquitously. The catalytic α subunit has two isoforms, but the α1 subunit (AMPK1) is the predominant one in vascular smooth muscle cells (VSMCs) and endothelial cells, playing a role in vascular remodeling in atherosclerosis and pulmonary hypertension. The AMPK1 gene is expressed in uterine arteries and placenta, and its expression increases during pregnancies exposed to hypoxia or in models of preeclampsia (Skeffington et al., 2016), which is associated with the early onset of preeclampsia in humans. Pharmacological activation of AMPK1 can reduce the fetal growth restriction induced by hypoxia by increasing uterine artery flow approximately twofold (Lane et al., 2020), and this increase in blood flow can be stimulated through NO-dependent (≈40%) and NO-independent pathways (Skeffington et al., 2016), suggesting that this protein plays a critical role in the vascular system of the placenta.
Other genetic components could also potentially contribute to vascular pathologies such as preeclampsia. For example, an analysis of the transcript levels of 14,040 genes in placentas from mothers with normotensive pregnancies or hypertension during pregnancy was undertaken by Cox et al. (2019). The present re-analysis of data available in the GEO database aims to find genes dysregulated at the vascular level that are associated with preeclampsia. Several pathways identified in the Kyoto Encyclopedia of Genes and Genomes (KEGG) can modulate vascular function. The genes we identified were most enriched in, and associated with, the terms "negative regulation of vascular smooth muscle cell proliferation," "vasculature development," "vascular endothelial growth factor receptor signaling pathways," "vascular wound healing," "coronary vasculature development," and "vasculogenesis" (see Table 2). Among the genes we identified, CITED2 plays a role in vasculogenesis; it is a critical gene in the cellular response to hypoxia and can inhibit HIF1α activation and the cellular response to hypoxia (Berlow et al., 2017). During preeclampsia, activation of endoplasmic reticulum stress induces the secretion of extracellular vesicles and the inhibition of CITED2 expression in the placenta (Collett et al., 2018). The CBP/p300-interacting transactivator with glu/asp-rich C-terminal domain-2, or CITED2, plays a role in trophoblast differentiation and is expressed in vascular endothelial trophoblast cells. Its deletion induces placental malformation, decreasing placenta and embryo weight and reducing the number of placental syncytiotrophoblasts, resulting in embryo death (Moreau et al., 2014; Imakawa et al., 2016). This suggests that these genes may be new targets of study. Similarly, another transcriptional regulator detected in the vasculogenesis pathway is the oncogene JUNB (a subunit of the AP1 factor), whose impaired expression in the placenta can elevate the risk of preeclampsia. For example, elevated JUNB expression during pregnancy leads to high expression of the phosphatase and tensin homolog (PTEN) protein and a reduction of approximately 50% in trophoblast invasiveness (Xue et al., 2020). Similarly, other studies in trophoblast cell lines suggest that elevated expression of JUNB can increase proliferation, migration, and the stimulation of angiogenesis (Zou et al., 2018). In contrast, JUNB is downregulated in placenta-derived mesenchymal stromal cells from women with preeclampsia (Nuzzo et al., 2017), suggesting that this gene plays a critical role during pregnancy and preeclampsia.
Another differentially expressed gene detected in Table 2 is Cullin 7 (CUL7), a scaffolding protein expressed in all tissues and associated with ubiquitin ligase activity. Hypoxia during pathological pregnancies reduces CUL7 expression in the villous trophoblast and syncytiotrophoblast, leading to impaired placental development (Tsunematsu et al., 2006; Fahlbusch et al., 2012).
Another dysregulated gene detected in Table 2 is vascular endothelial growth factor type A (VEGFA), which is critical during pregnancy for endothelial cell proliferation, migration, and angiogenesis. It is elevated in maternal serum in preeclampsia cases at 30 weeks and is sequestered during preeclampsia by the excessive production of sFLT-1, resulting in endothelial dysfunction; this suggests that an excess of VEGF production might play a role in preeclampsia through VEGF toxicity and the stimulation of sFLT-1 production (Jena et al., 2020).

TABLE 2 | Analysis of the transcript levels of 14,040 genes in the placenta from mothers with normal blood pressure or hypertension during pregnancy, obtained by Cox et al. (2019). Data available in GEO (https://www.ncbi.nlm.nih.gov/gds) were re-analyzed to find genes dysregulated at the vascular level, and functional analysis of the differentially expressed genes was performed by gene ontology (GO) and pathway enrichment analyses (DAVID Bioinformatics Resources 6.8, NIAID/NIH). Our approach provides a co-expression network analysis of functional pathways that are modified in the placenta by hypertension and have a role in vascular health. The most enriched terms were negative regulation of vascular smooth muscle cell proliferation, vasculature development, vascular endothelial growth factor receptor signaling pathways, vascular wound healing, coronary vasculature development, and vasculogenesis.
Women with preeclampsia show an elevated level of SRC protein, but its activation by phosphorylation at the Tyr-416 residue is lower, suggesting reduced activation compared with normotensive women. Downstream targets activated by SRC, including ERK1/2, p38, and JNK, also show low phosphorylation in preeclampsia, demonstrating the inactivation of SRC (c-SRC), which is critical for trophoblast development and differentiation. MAPK14, or p38-alpha, is critical for trophoblast development and differentiation, and its activation stimulates the PPARγ pathway. This pathway induces vascularization and the expression of Syncytin-1, a critical element in placentation. In contrast, human placentas from preeclampsia show a low level of expression of the MAPK14-PPARγ-Syncytin-1 genes (Ruebner et al., 2012). Fyn is an oncogene that plays a role in ERK signal transduction, stimulating the expression of the KCa3.1 channel, an endothelial vasodilator, and inhibiting blood pressure increases during pregnancy; however, another study observed the downregulation of this pathway during preeclampsia.
Another gene observed in Table 2 is neutrophil cytosolic factor 4 (NCF4), a subunit of NADPH oxidase, which is expressed in trophoblast cells after implantation (Gomes et al., 2012). NADPH oxidase activity is the primary source of placental oxidative stress, which is a characteristic of preeclampsia. Similarly, another member detected is the CYBA subunit gene, which is elevated in preeclampsia (Gomes et al., 2012; Trifonova et al., 2014), suggesting that NADPH oxidase activity plays a critical role in the oxidative stress observed during pregnancy. Moreover, elevated NADPH oxidase activity can modulate the production of sFlt-1 and PlGF, underlining the critical role of this activity in the development of preeclampsia (Hernandez et al., 2021).
The human placenta produces proteoglycans, and the glypican family is one group of these macromolecules. Glypicans are attached to the plasma membrane by a GPI anchor and can interact with VEGF. Glypican-3 (GPC3), which is expressed in the human placenta and plays an essential role in proliferation and differentiation, shows modified expression in pathological placentas (Table 2); low levels are detected in the third trimester of gestation in placental samples and maternal serum from pregnancies diagnosed with fetal growth restriction and preeclampsia (Chui et al., 2012; Gunatillake et al., 2019; Shimizu et al., 2019). Myosin heavy chain-10 (MYH10) modulates cell migration and invasion together with the preimplantation factor peptide (PIF) to promote implantation and remodeling of the uterine wall (Yang et al., 2018).
Another plasma membrane protein observed in Table 2 is Plexin D1 (PLXND1), which plays a role in endothelial cell signaling; its impaired expression is associated with vascular disease, inducing atherosclerotic lesions by macrophages and inhibiting angiogenesis via stimulation of soluble Flt-1, an inhibitor of VEGF. Similarly, we detected differential expression in the pathological placenta of SMAD family member-6, or SMAD6 (see Table 2). When expressed postnatally, this gene can modulate endothelial gene expression and participates in vascular development; its mutation is associated with hypertension in children related to renal arterial occlusive disease (Viering et al., 2020).
Preeclampsia is a two-stage disease with abnormal placentation and placental hypoxia caused by the impaired remodeling of the uterine wall. This affects the maternal endothelium and the production of endothelial progenitor cells (EPCs). EPCs circulating in maternal blood are characterized by CD34 antigen expression and have been used as a preeclampsia marker; CD34 is related to vascular wound healing and was detected in the pathological placenta (Table 2). After 20 weeks of gestation, women with preeclampsia show a higher level of endothelial progenitor cells (CD34+) than normotensive women (Brown et al., 2019). In contrast, in early pregnancy a low level of CD34+ cells has been observed (Laganà et al., 2017), suggesting that they are a marker dependent on the gestational week and on pathology during pregnancy.
Adiponectin receptor 2 (ADIPOR2) is a G protein-coupled receptor expressed in the embryo and placenta. ADIPOR2 induces low proliferation via inactivation of the JNK pathway in the placenta and stimulates lipid metabolism in the embryo. The human placenta is an independent source of adiponectin, and placental adiponectin has proinflammatory and antiproliferative effects on trophoblast cells during the first trimester. However, several reports describe contradictory adiponectin levels, which could potentially be produced by an adiponectin resistance state (Barbe et al., 2019).
Heparanase (HPSE) is a dysregulated gene observed in the placenta (Table 2) that plays a role in vascular wound healing by cleaving heparan chains on the cell surface. Its products bind to sFLT-1, which promotes proliferation and invasion of trophoblast cells during early pregnancy (Che et al., 2018). However, heparanase expression is elevated by hypoxia during preeclampsia, and this elevated activity stimulates hypoxia-induced sFLT-1 release and inhibition of the proangiogenic function of VEGF (Ginath et al., 2015; Eddy et al., 2019). The bone morphogenetic protein receptor type-II (BMPR2) is a plasma membrane protein that transduces extracellular signals through the formation of heteromeric complexes, and its dysregulation plays a role in vascular remodeling and endothelial dysfunction during pulmonary hypertension (Machado et al., 2003; D'Amico et al., 2018). We detected dysregulated expression of BMPR2 in placentas from mothers with hypertension (see Table 2). Previous data show that BMPR2 and its ligands are critical for the maintenance of vascular development during pregnancy via VEGF production, invasion of the uterine wall, and embryo placentation (Nagashima et al., 2013; You et al., 2021). These data suggest that heparanase and BMPR2 can play a potential role during maternal hypertension in the placenta by inhibiting the proangiogenic effects of VEGF.
Another gene associated with the VEGF receptor signaling pathway (Table 2) is Rho-associated coiled-coil-containing protein kinase-2 (ROCK2). It is a serine/threonine kinase expressed at a high level in placentas from preeclamptic women, inducing actin cytoskeleton rearrangement in trophoblast cells and shedding of syncytiotrophoblast macrovesicles and exosomes, accompanied by decreased outgrowth of microvilli (Han et al., 2016). Similarly, the heat-shock 27-kDa protein-1 (HSPB1) plays a role in the VEGF receptor signaling pathway and shows impaired expression in pathological placentas with hypertension (Table 2). Low expression of the heat-shock proteins HSPB1 and HSP70 plays a role in the vascular alterations and umbilical artery flow modification detected in the placenta after premature birth (Dvorakova et al., 2017), which suggests that different modulators can change the VEGF pathway in pathological placentas.
G protein-coupled estrogen receptor-1 (GPER1) shows modified expression in the placenta of mothers with hypertension (Table 2). GPER1 is a mediator of estrogen signaling, protects the fetus during maternal inflammation, and is associated with negative regulation of vascular smooth muscle cell proliferation. For example, GPER1 can prevent the adverse effects of type-I interferon during maternal infection (Harding et al., 2021). Moreover, the level of GPER1 in placentas from preeclampsia is reduced by about 50%, a reduction that has been partially associated with estrogen treatment in trophoblast culture and that correlates with elevated apoptosis and lower cellular proliferation in the placenta from preeclampsia. The functions of some of the genes detected in Table 2 and their relation to vascular disease require further study to better understand their role in hypertensive pathology during pregnancy and in the placenta.
Several reports suggest that melatonin acts as a protective agent during pregnancy for maternal, fetal, and placental physiology (Dou et al., 2019; Nagasawa et al., 2021; Sun et al., 2021). Melatonin could directly stabilize blood pressure in pathological pregnancies by modifying the previously described genes or new targets, which requires further studies in gestational hypertension, preeclampsia, and other pathologies.
MELATONIN AND HYPERTENSION
Melatonin acts through two G-protein-coupled receptors named MT1 and MT2, which, via Gi- and Gq-protein activation, lead to decreased levels of cAMP and increased levels of cytosolic calcium. Both receptors participate in the temporal synchronization of the circadian system and in sleep quality (Jockers et al., 2016). Diurnal animals and humans show high blood pressure during daytime hours and a dip of about 10-20% during dark hours, which in humans correlates with melatonin secretion. Similarly, circadian rhythms have been observed for heart rate, and they are abolished by impaired secretion of the pineal hormone (Fabbian et al., 2013; Tabara et al., 2018). In contrast, the absence of circadian rhythms of blood pressure elevates the risk of cardiovascular morbidity/mortality through ventricular hypertrophy, renal dysfunction, remodeling of the carotid structure, cerebrovascular accident, hypertension, and stroke (Baker and Kimpinski, 2018). The relevance of these circadian rhythms can be observed in the pharmacological treatment of hypertension with an angiotensin II receptor blocker, which is more effective during the hours of higher melatonin production, that is, at night (Giles, 2006), suggesting the relevance of circadian rhythms and melatonin signaling for cardiovascular health.
A correlation between high blood pressure and arterial stiffness has been observed in patients, and the risk is higher when the disruption of the circadian rhythm of blood pressure is more severe. This severity is associated with a smaller amplitude of the circadian rhythm or, eventually, a flattening of the circadian pattern, without a lower systolic or diastolic pressure during the night hours (Park et al., 2019). For example, after liver transplantation, about 90% of patients show a chronodisruption of blood pressure oscillation, with about 55% showing an arrhythmic pattern and about 36% an inverted pattern. This chronodisruption has been associated with poor glomerular filtration of cystatin C and its plasma accumulation (Hryniewiecka et al., 2018), the latter being a robust marker of kidney injury, systemic inflammation, and mortality (Hendrickson et al., 2020).
Melatonin receptors are present in several vascular tissues, such as the circle of Willis and vertebral arteries, the caudal artery, the aorta, the coronary arteries and carotids, the cardiac ventricular wall, and systemic arteries, suggesting that melatonin plays a role in various cardiovascular diseases (Baker and Kimpinski, 2018; Prado et al., 2018). For example, melatonin levels are about 5-fold lower in coronary heart disease, elevating the risk of infarction and death. This can occur because the suppression of melatonin production induces vascular vasoconstriction and hypertension, whereas supplementation reduces blood pressure, inflammation, vascular infiltration of lymphocytes, and aldosterone levels, and lowers the risk of death caused by myocardial infarction via reduction of oxidative stress (Baker and Kimpinski, 2018; Prado et al., 2018; Simko et al., 2018). Similarly, newborn sheep supplemented with melatonin showed reduced pulmonary arterial pressure and enhanced vascular vasodilatation, which occurs via an elevation of antioxidant capacity through stimulation of the antioxidant enzymes SOD, CAT, and GPx, induction of vasodilator genes, and inhibition of the vasoconstrictor gene response (Gonzaléz-Candia et al., 2020), suggesting that this hormone has ubiquitous effects in several vascular territories, lowering the risk of cardiovascular disease.
Interestingly, patients with pulmonary hypertension show a low plasma level of melatonin and elevated levels of IL-1β. In animal models, supplementation with melatonin inhibits hypoxia-induced thickening and remodeling of the pulmonary artery, reduces the expression of the proinflammatory cytokine IL-1β in pulmonary tissue 3-fold, and reduces macrophage activation (Zhang et al., 2020). A similar result was observed in gestational hypertension induced by L-NAME: melatonin supplementation lowered systolic blood pressure by about 10% and urine protein content by about 30%, increased the antioxidant capacity of the rats by about 28%, and lowered circulating sFlt-1 levels by about 29% (Zuo and Jiang, 2020).
A study in patients with type 2 diabetes and hypertension demonstrated that in about 30-32% of non-dipping patients, treatment with 3-5 mg of melatonin restored the dipping of systolic blood pressure, diastolic blood pressure, and mean arterial pressure during the dark hours, suggesting that melatonin could synchronize the circadian oscillation of blood pressure in about one-third of patients (Możdżan et al., 2014). A similar observation was made in animal models, where melatonin administration reduced hypertension in animals with metabolic syndrome (Baker and Kimpinski, 2018). Moreover, melatonin has the dual capacity to modulate vascularization depending on the cellular condition. For example, in tissues exposed to a lesion, such as skin, bone, and gastric ulcers, melatonin induces angiogenesis. This effect occurs because melatonin induces endothelial expression and secretion of VEGF, stimulating neovascularization (Ma et al., 2020).
Patients with dyslipidemia and atherosclerosis show a low level of melatonin production; this hormone lowers the plasma levels of fibrinogen and FVIII and leads to the inhibition of platelet aggregation (Otamas et al., 2020). A similar observation was made in postmenopausal women with prevalent hypertension, who showed a 26% reduction in the urinary melatonin metabolite 6-sulfatoxymelatonin; this chronic low melatonin level elevates the risk of hypertension by about 17-23%, and the risk was elevated by 60% when the patient reported using alcohol or medication to sleep (Pérez-Caraballo et al., 2018). A critical element for inducing vascular vasodilation is nitric oxide, which can be induced by melatonin, producing vasodilation, lowering blood pressure, and reducing the effects of endothelin and angiotensin II on human umbilical vein endothelial cells (Baker and Kimpinski, 2018).
Hypertension and valvular dysfunction can induce heart failure and hypertrophy. Aortic constriction induces the cardiac hypertrophy markers ANP, BNP, and β-MHC; however, these can be reverted by melatonin supplementation. Similarly, the apoptosis markers caspase-3, cytochrome-c, and Bax, as well as autophagy, are lowered by melatonin treatment, suggesting a protective role of melatonin after aortic constriction. This inhibition of cardiac hypertrophy can occur due to the capacity of melatonin to stimulate the activation of p-mTOR and p-AKT and of the downstream pathways p-S6K and p-4E-BP1 (Xu et al., 2020).
Several unknown pathways could potentially explain some of the effects of melatonin. For the functional analysis of the pathways correlating melatonin and hypertension, we searched the datasets of the GEO database (http://www.ncbi.nlm.nih.gov/geo). Queries were performed using the "MELATONIN" keyword after a systematic search restricted to specific fields. We downloaded 39 experimental results for melatonin and excluded platforms without gene accession numbers, with incomplete information, or based on cancer cells, transgenic animals, knockouts, or experiments with modification of the photoperiod. We then obtained the experimental platforms GSE92612 and GSE169459, in which vascular smooth muscle contraction pathways are modified by melatonin, and observed 115 common genes (see Table 3). Furthermore, the gene lists combined and compared with a Venn diagram showed 35 common elements between "Hypertension" (Table 1) and "Melatonin" (see Table 4). The differentially expressed genes obtained here include several examples modified by melatonin that act on vascular tone, which require further study into their potential role in hypertensive pathology during pregnancy and their therapeutic role in protecting the placenta/fetus/mother from the adverse effects of hypertension.

TABLE 3 | Differentially expressed genes associated with melatonin (N = 115 genes). Functional analysis of melatonin pathways was performed on datasets from the GEO database (http://www.ncbi.nlm.nih.gov/geo). The systematic search was restricted to the following specific field: expression profiling by array. In total, 39 experimental results for melatonin were downloaded, and we excluded platforms without gene accession numbers, with incomplete information, or based on cancer cells, transgenic animals, knockouts, or experiments with modification of the photoperiod. After this revision, we obtained the experimental platforms GSE92612 and GSE169459, which were analyzed with GEO2R, selecting vascular smooth muscle contraction pathways. Both experiments had 115 common genes.

TABLE 4 | Common genes between Tables 1 and 3. We identified 35 common elements between "Hypertension" and "Melatonin" associated with vascular smooth muscle contraction.
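The overlap step described above is a simple set intersection between the two gene lists. The following Python sketch illustrates it; the file names and the one-gene-symbol-per-line format are assumptions for illustration, not the actual files produced in this analysis.

```python
# Venn-style overlap between the hypertension DEG list (Table 1) and the
# melatonin DEG list (Table 3). Expected sizes from the text: 36, 115, 35.

def load_gene_set(path):
    """Read a plain list of gene symbols, one per line, into an upper-case set."""
    with open(path) as fh:
        return {line.strip().upper() for line in fh if line.strip()}

hypertension_degs = load_gene_set("table1_hypertension_degs.txt")   # N = 36
melatonin_degs = load_gene_set("table3_melatonin_degs.txt")         # N = 115

common = sorted(hypertension_degs & melatonin_degs)                 # N = 35
only_hypertension = sorted(hypertension_degs - melatonin_degs)
only_melatonin = sorted(melatonin_degs - hypertension_degs)

print(f"Common genes ({len(common)}):", ", ".join(common))
print(f"Hypertension-only: {len(only_hypertension)}, melatonin-only: {len(only_melatonin)}")
```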
Melatonin and Pregnancy
FIGURE 2 | Melatonin can modify molecular pathways in the placenta. Given the supraphysiological effect of the melatonin hormone, several pathways that affect vascular function are inhibited (−) or stimulated (+) in the placenta. These effects involve the hypoxia, pro/antiangiogenic, vasodilatory, metabolic, oncogenic, antioxidant, and proinflammatory pathways.

Several studies have reported a relationship between hypertension and melatonin, with negative outcomes induced during pregnancy by
modifications of endothelial function, antiplatelet effects, vascular tone, vasoactive factor production, and oxidative stress. For example, during severe preeclampsia, melatonin production and the expression of the MT1 and MT2 receptors are lower than in normal pregnancies. Moreover, melatonin supplementation in patients can delay delivery and reduce oxidative stress and hypertension during pregnancy (de Chuffa et al., 2019). VEGF production during normal pregnancy is inhibited when the mother produces a low level of melatonin, increasing the risk of premature birth and abortion (Valenzuela et al., 2015; de Chuffa et al., 2019). During normal pregnancy, a progressive increase in melatonin levels is observed, and non-dipping pregnant women with preeclampsia show more severe hypertension correlated with a lower level of melatonin production during the dark hours (Bouchlariotou et al., 2014). The placenta is an extra-pineal site of melatonin synthesis, and placentas from preeclampsia show low expression of the critical enzymes of placental melatonin synthesis, AA-NAT and HIOMT. Interestingly, melatonin supplementation has antioxidant effects on the placenta and reduces the levels of sFlt-1, Activin-A, and sEng, reducing trophoblastic debris from early-trimester placentae exposed to preeclamptic serum. The trophoblast mitochondria synthesize melatonin locally, protecting mitochondrial and respiratory function, a critical protagonist during the placental hypoxia induced by preeclampsia (Langston-Cox et al., 2021). Melatonin prevents oxidative stress in the placenta by inducing the activation of the antioxidant system via increased Nrf-2 translocation, a potent inducer of mitochondrial activity and biogenesis (Hobson et al., 2018). Melatonin supplementation during pregnancy in animal models increases umbilical blood flow (Thakor et al., 2010; Langston-Cox et al., 2021), protecting endothelial function, repairing the endothelial monolayer, inhibiting vascular inflammation and VCAM-1 production in placentas obtained from preeclamptic women, and reducing blood pressure and sFLT-1, markers of vascular damage during preeclampsia (Hung et al., 2013; Reiter et al., 2017; Hobson et al., 2018; de Chuffa et al., 2019). Additionally, in women with early-onset preeclampsia, melatonin supplementation prolonged the interval from diagnosis to delivery by 6 days and required lower doses of antihypertensive treatment (Hobson et al., 2018), suggesting partial inhibition of the adverse effects of preeclampsia.
Umbilical blood samples collected at term from pregnancies affected by intrauterine growth restriction (IUGR) show a lower level of circulating melatonin (∼50%). This reduction occurs in parallel with reduced circulating levels of the angiogenic factor PlGF in umbilical blood (Hobson et al., 2018; Berbets et al., 2020). Supplementation of melatonin in an animal model of gestational hypertension can lower the systolic blood pressure and urine protein content and ameliorate the reduction of placental weight. Moreover, it reduces the production of the antiangiogenic factor sFLT-1, increases the proangiogenic factor PlGF, and increases the mother's antioxidant capacity (Zuo and Jiang, 2020). Similarly, melatonin partially reverted impaired placental perfusion and placental coagulation and induced anti-inflammatory factors in mouse pregnancies associated with intrauterine inflammation-related oxidative stress. The reduction of oxidative stress and the improvement of placental perfusion induced by melatonin can occur through improved endothelial function via increased nuclear translocation of Nrf2 and elevation of the endogenous antioxidant enzyme heme-oxygenase-1 (Hobson et al., 2018). These antecedents point to the varied actions of the hormone melatonin on placental function and its potential role in modulating several pathways that are modified in pathological pregnancies (see Figure 2).
CONCLUSION
The melatonin hormone has antioxidant, homeostatic, and time-giving roles in the vascular system. Temporal desynchronization of the vascular system by inhibition of melatonin production induces pathologies such as hypertension. Melatonin supplementation shows a protective role over the vascular system, reverting the elevation of blood pressure, oxidative stress, and antiangiogenic factors. During pregnancy, impaired production of melatonin can elevate the risk of poor fetal/placental development through preeclampsia, intrauterine growth restriction, and preterm birth. Melatonin can protect the pregnancy via stimulation of the antioxidant system and of vascular factors such as VEGF and PlGF, and by inhibiting antiangiogenic factors such as sFLT-1 and sEng. Current evidence describes an elevation of melatonin production during pregnancy by the placenta, and we believe that this local production is a keystone of placental physiology. In this regard, we propose that melatonin plays a supraphysiological and dual role in placental physiology and could be the basis for future protective and therapeutic applications in vascular pathologies of pregnancy.
AUTHOR CONTRIBUTIONS
FV-M, CL, and GD contributed to conception and analysis. KJ-M and CL organized the database. FC-P and FV-M performed the bioinformatic analysis. FV-M wrote the first draft of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
|
2021-11-16T14:28:37.581Z
|
2021-11-16T00:00:00.000
|
{
"year": 2021,
"sha1": "77e4054084ce11f333acd2a6c1e42cfa0fc69bb9",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2021.767684/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "77e4054084ce11f333acd2a6c1e42cfa0fc69bb9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
230609284
|
pes2o/s2orc
|
v3-fos-license
|
On-line monitoring of mine tunnel deformation using laser radar
Mine tunnel deformation blocks underground coal mine traffic and can even cause serious safety incidents. Aiming at the lack of monitoring methods for tunnel deformation, this paper presents a tunnel deformation measurement method based on laser-radar technology. It is a non-contact, on-line measurement method. The system is composed of two parts: the host part consists of a computer, a ZigBee module, and a processing software system based on Processing, while the slave part consists of a four-rotor craft, an STM32 microprocessor, a laser radar, a ZigBee wireless communication module, and a power supply module. The proposed radar-based tunnel deformation measurement method has the advantage of a small measurement error, and its data communication and processing speed is suitable for on-line monitoring.
Introduction
There are a large number of tunnels in underground coal mines, serving as lifelines for the staff, material transport lines, and safe ventilation lines, and tunnel deformation often occurs with tunnel development, resource extraction, geological activity, etc. Tunnel deformation blocks traffic within the tunnel and can even cause serious safety incidents [1]. Therefore, the detection of mine tunnel deformation is of great significance for the safety of coal mine production. Tunnel deformation monitoring methods can be classified as contact measurement and non-contact measurement [2]. Most contact measurement methods rely on displacement sensors, pressure sensors, and other terminal monitoring equipment, and sensors need to be placed at appropriate intervals along multiple tunnels, which complicates operation and increases the cost of the measurement system. Contact measurement itself also causes a certain amount of damage to the tunnel. Non-contact measurement combines results from mechanical and electronic technology, computer technology, and other disciplines. Non-contact methods also have the advantage of fast measurement, and the resulting data are more convenient for follow-up research. Commonly used non-contact measurement technologies include laser ranging sensors, digital photogrammetry, ultrasonic technology, and composite image monitoring. A dynamic roadway deformation monitoring system based on a phase laser ranging sensor was proposed by Xu et al. [3], which can monitor roof subsidence and local deformation of mine tunnels. Xu et al. [4, 5] conducted experimental research on roadway deformation using digital photography, and the experimental results showed that the technology was feasible and effective. Li et al. [6] used ultrasonic technology to monitor mine tunnel deformation and verified that ultrasonic technology has certain advantages. A study of downhole deformation monitoring based on composite image monitoring technology was presented by Zhang et al. [7]. The method proposed in this paper does not require any treatment of the tunnel wall: the laser radar is mounted on a four-rotor craft (UAV), the entire tunnel sidewall is scanned while moving, and the data are sent through wireless communication to the host computer for processing [8]. The whole system is simple and the measurement is fast, overcoming the shortcomings of existing methods.
Overall program of the measurement and control system
The overall program of this study can be divided into two parts: the host computer part and the slave computer part. The slave computer is mainly used for the collection and transmission of the tunnel contour data. The host computer is mainly used for data processing, 3D reconstruction of the tunnel, and deformation evaluation. The whole process involves hardware and software systems, and the overall block diagram is shown in Fig. 1.
Introduction of system
The hardware system comprises the power module, the RPLIDAR laser radar, the four-rotor craft (UAV), the STM32 single-chip microcontroller, the cc2530 ZigBee transmitting module, the host computer's cc2530 ZigBee receiving module, and the computer used for host-side processing. The hardware and the functions of the links between the components are shown in Fig. 2.
RPLIDAR laser radar
The design uses a new generation of low-cost two-dimensional laser radar, the RPLIDAR A2 developed by SLAMTEC [9, 10]. The RPLIDAR mainly includes the laser ranging core and the power and mechanical parts that rotate the ranging core at high speed; the RPLIDAR system performance parameters are shown in Table 1. When the RPLIDAR is working normally, the ranging core rotates clockwise and scans. The user can obtain the scanning and ranging data through the RPLIDAR communication interface, and control the start, stop, and rotation speed of the rotary motor through PWM.
The RPLIDAR adopts laser triangulation ranging technology and, coupled with the high-speed vision acquisition and processing unit developed by SLAMTEC, it can carry out up to 16,000 ranging actions per second [11]. In each ranging operation, the RPLIDAR emits a modulated infrared laser signal; the laser light reflected by the target object is received by the RPLIDAR vision acquisition system and processed by the real-time DSP processor embedded in the RPLIDAR, which then outputs the distance to the target object and the current angle through the communication interface. The schematic diagram of the RPLIDAR system is shown in Fig. 3.
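Each measurement therefore arrives as an (angle, distance) pair in the scanner's polar frame. The following Python snippet is a minimal sketch, not taken from the system's own code, of how one revolution of such samples can be converted into Cartesian points of the tunnel cross-section; the sample values, units (degrees, millimetres), and orientation convention are assumptions for illustration.

```python
import math

def scan_to_xy(scan):
    """Convert (angle_deg, distance_mm) samples from one lidar revolution
    into (x, y) points in millimetres, with the scanner at the origin."""
    points = []
    for angle_deg, distance_mm in scan:
        theta = math.radians(angle_deg)
        points.append((distance_mm * math.cos(theta),
                       distance_mm * math.sin(theta)))
    return points

# Hypothetical samples: here 0/180 deg are taken as the side walls and
# 90/270 deg as roof and floor.
example_scan = [(0.0, 1000.0), (90.0, 1200.0), (180.0, 1000.0), (270.0, 1200.0)]
print(scan_to_xy(example_scan))
```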
STM32F103C8T6 microcontroller
The system adopts the STM32F103C8T6 microcontroller [12, 13], as shown in Fig. 4. This model has a crystal frequency (after multiplication) of up to 72 MHz, runs fast, is low cost, and has strong anti-interference capability, making it suitable for the research system. The design mainly uses the following features of the STM32F103C8T6: 1) timer PWM output; 2) serial communication; 3) analog voltage input (ADC).
LabVIEW, a graphical programming language, is adopted by this system for program design. The program is divided into two parts: 1) the main program of the laser tunnel detection slave computer; 2) the host computer program for laser tunnel detection. Communication between the host and slave computers uses cc2530 ZigBee wireless communication for data transmission. The main program of the slave computer for laser tunnel detection is shown in Fig. 5, and the host computer program for laser tunnel detection is shown in Fig. 6.
Cc2530 ZigBee wireless communication
The system uses cc2530 ZigBee wireless transceiver modules for mutual data transmission, as shown in Fig. 7. ZigBee is a short-distance wireless communication technology with low cost, low power consumption, and ad-hoc networking, and it is widely used in environmental monitoring, smart homes, and other applications. The ZigBee module needs to be configured before use. The configuration has five steps: 1) connect the device and select the serial port number; 2) set the node type (coordinator or router); 3) set the radio channel and PAN ID; 4) set the node number and baud rate; 5) set the data transmission mode.
Power supply module
The slave hardware is mounted on the four-rotor unmanned aerial vehicle, so a 3S lithium battery is used as the system power supply [14]. Since the voltage of a 3S lithium battery ranges from 11.1 V to 12.6 V DC and the system supply voltage should be 5 V DC, it is necessary to design a DC-DC step-down (buck) power supply circuit [15]. In this paper, a high-power adjustable DC-DC step-down module based on the XL4015E buck chip is adopted.
Tunnel side wall data collection
Tunnel wall data acquisition is implemented by the slave computer software. A scan request is sent to the RPLIDAR mounted on the UAV, and the RPLIDAR returns the response data, which the STM32 reads in real time. After preprocessing, the data are sent to the host computer via the ZigBee module. Driven by the UAV, the RPLIDAR scans the entire mine tunnel, so the host computer can obtain data for the entire tunnel sidewall.
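On the host side, a ZigBee receiver typically appears as an ordinary serial port, so the incoming measurements can be read with a standard serial library. The sketch below is only a hypothetical illustration of that step: the port name, baud rate, and the simple "angle,distance" text-line format are assumptions, not the frame format actually used by this system.

```python
import serial  # pyserial

def read_scan_lines(port="COM3", baudrate=115200, max_points=360):
    """Read up to max_points 'angle,distance' text lines from the ZigBee
    serial link and return them as (angle_deg, distance_mm) tuples."""
    points = []
    with serial.Serial(port, baudrate, timeout=1.0) as link:
        while len(points) < max_points:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue  # timeout or empty frame, keep waiting
            try:
                angle_str, dist_str = line.split(",")
                points.append((float(angle_str), float(dist_str)))
            except ValueError:
                continue  # skip malformed frames
    return points

if __name__ == "__main__":
    print(read_scan_lines()[:5])
```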
Data processing and image construction
The data processing and image construction in this paper are based on the Processing 3.3 software. The size(width, height, renderer) function is used to make Processing work in 3D mode, and then the core class is declared, using the Scene(parent) function to build the 3D environment [16], so that a 3D scene that can be zoomed in and out is obtained. In this paper, the ControlP5 library call cp5.addButton(string).setPosition(X-Position, Y-Position) is used to create a Start button and a Stop button, which respectively send the start and stop commands to the slave computer. The result is shown in Fig. 10.
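Conceptually, the 3D reconstruction amounts to stacking successive 2D cross-section scans along the tunnel axis as the UAV advances. The sketch below (written in Python rather than the Processing code actually used by the system) illustrates this idea under the assumption that each scan is tagged with the UAV's position along the tunnel; both the tag and the sample values are hypothetical.

```python
import math

def profile_to_3d(scan, z_mm):
    """Lift one (angle_deg, distance_mm) cross-section to 3D points,
    using z_mm as the assumed UAV position along the tunnel axis."""
    pts = []
    for angle_deg, dist_mm in scan:
        theta = math.radians(angle_deg)
        pts.append((dist_mm * math.cos(theta), dist_mm * math.sin(theta), z_mm))
    return pts

# Hypothetical scans taken every 500 mm along the tunnel.
scans = {
    0.0:   [(0.0, 1000.0), (90.0, 1195.0), (180.0, 1002.0), (270.0, 1198.0)],
    500.0: [(0.0, 998.0),  (90.0, 1201.0), (180.0, 1001.0), (270.0, 1203.0)],
}

cloud = [p for z, s in scans.items() for p in profile_to_3d(s, z)]
print(f"{len(cloud)} points in the reconstructed cloud")
```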
The flow chart of the PC host software program is shown in Fig. 11.
Tunnel simulation experiment
The system simulates a tunnel in a corridor for 3D image reconstruction. The simulation was carried out at night under two conditions, with the corridor lights on and with them off, as shown in Fig. 12.
First, the slave computer program is downloaded and the hardware devices are connected, as shown in Fig. 13. The experiments show that the images scanned with the corridor lights on and off are the same (as shown in Fig. 14), indicating that the system is not limited by the lighting inside the tunnel.
Error analysis
From the measured corridor contour data, the distance values at 0° and 180° and at 90° and 270° are taken, corresponding to the four points above, below, to the left of, and to the right of the laser radar, denoted A, C and B, D in Fig. 15, with distances d_A, d_C, d_B, and d_D, respectively. The measured corridor height h_m (from the points above and below) and the measured left-right width w_m can then be calculated as: h_m = d_A + d_C, w_m = d_B + d_D.
Fig. 15. Error analysis point distribution
The actual height of the corridor is h_0 = 2400 mm, and the left-right width is w_0 = 2000 mm. The relative errors of the corridor height and width measured by the system can therefore be calculated as: e_h = |h_m − h_0| / h_0 × 100 %, e_w = |w_m − w_0| / w_0 × 100 %. The numerical analysis of the specific errors is shown in Table 2. The system error is calculated to be about 0.3 %. Since the side walls of the tunnel consist of surrounding rock, once deformation occurs the change will generally reach more than 50 mm, and the height of a tunnel is generally not less than 1800 mm, so deformation causes a height change rate of more than 5 %, much higher than the system error. The system measurement error therefore meets the needs of actual tunnel deformation measurement.
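For illustration, the relative-error check reduces to a few lines of arithmetic. The measured ray distances in the snippet below are hypothetical placeholders, not the values reported in Table 2; only the actual corridor dimensions (2400 mm by 2000 mm) come from the text.

```python
# Minimal sketch of the relative-error evaluation described above.
ACTUAL_HEIGHT_MM = 2400.0
ACTUAL_WIDTH_MM = 2000.0

d_A, d_C = 1195.0, 1198.0   # hypothetical distances to the points above and below the lidar
d_B, d_D = 1002.0, 1003.0   # hypothetical distances to the points left and right of the lidar

h_m = d_A + d_C             # measured height
w_m = d_B + d_D             # measured width

e_h = abs(h_m - ACTUAL_HEIGHT_MM) / ACTUAL_HEIGHT_MM * 100.0
e_w = abs(w_m - ACTUAL_WIDTH_MM) / ACTUAL_WIDTH_MM * 100.0
print(f"height error {e_h:.2f} %, width error {e_w:.2f} %")
```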
Conclusions
In this paper, a tunnel deformation measurement method based on laser radar technology is proposed, and a tunnel deformation detection system based on the RPLIDAR radar is designed. In the slave computer, the STM32 microcontroller, as the control and processing core, is organically combined with the laser sensor and ZigBee to realize the data acquisition system. Through data communication with the host computer, the collected data are sent to the processing program, which displays the processed data, realizing the simple reconstruction of the two-dimensional plane and the 3D image and achieving fast, low-error detection of tunnel deformation.
|
2020-12-10T09:05:48.945Z
|
2020-12-07T00:00:00.000
|
{
"year": 2020,
"sha1": "e3e477442e1c9934d19807d3fc66704375e8b266",
"oa_license": "CCBY",
"oa_url": "https://www.jvejournals.com/article/21794/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bac650437a204edd965ab0ffe5b506fe427c9cd2",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
}
|
213236335
|
pes2o/s2orc
|
v3-fos-license
|
Electrical treeing and partial discharge characteristics of silicone rubber filled with nitride and oxide based nanofillers
ABSTRACT
INTRODUCTION
Electrical treeing is a pre-breakdown phenomenon caused by continuous electrical stress in polymeric insulation. Triggered by partial discharge (PD), electrical treeing occurs under electrical stress through tree-like paths. The growth of an electrical tree results in the formation of a conductive path between the high-voltage and grounded parts of the insulation, thereby causing breakdown [1, 2]. Electrical tree activity has also been reported to take place within areas of weak points, such as cable accessories, due to electric field localization, which is accompanied by PD. In practice, cable accessories such as joints, terminations, and stress cones are usually made of silicone rubber (SiR). SiR has characteristics of both inorganic and organic materials. It also offers a number of benefits not found in other organic rubbers, such as hydrophobicity, high temperature resistance, electrical insulation, good chemical stability, and flame retardancy, but it has poor solvent resistance and mechanical properties [3, 4]. In order to enhance the electrical tree resistance of high-voltage cable accessories, different methods have been used by SiR manufacturers, such as improving material treatment, adding treeing inhibitors or fillers, modifying the material, etc. [5, 6]. Adding nanoparticles has lately drawn great attention for its potential to improve physical, chemical, mechanical, and electrical properties. Polymers such as polyamide (PA), polyethylene (PE), and epoxy resin combined with nanoparticles including MgO, TiO2, and SiO2 have been extensively studied [7-14]. Jamil et al. examined the electrical tree propagation of SiR filled with organomontmorillonite (OMMT) and found that the nanoparticles were able to slow down the progression of the electrical treeing [15].
Hosier et al. studied a low-density polyethylene (LDPE) polymer filled with silicon dioxide (SiO2) and compared it with silicon nitride (Si3N4) as the filler. They stated that Si3N4 provides potential advantages over an oxide-based system through a reduction of surface hydroxyl groups, potentially leading to a composite whose dielectric properties are considerably less influenced by absorbed water or environmental effects. They found that nitride composites show improved breakdown strength over oxide composites in all the conditions they tested, namely dry, ambient, and wet [16, 17]. In spite of this improvement shown by Si3N4 as a filler, no research has been carried out on the electrical tree performance associated with PD in SiR filled with Si3N4 nanoparticles. Therefore, this paper presents studies of electrical treeing associated with PD in SiR filled with Si3N4 to suppress the growth of electrical treeing. In addition, the propagation length of the electrical trees was studied in order to investigate the performance of the SiR/Si3N4 nanocomposites, and the tree structure, growth speed, and electrical tree characteristics are discussed collectively.
RESEARCH METHOD
2.1. Material used
The host polymer material is Sylgard 184 silicone elastomer, which has low viscosity, with a dielectric strength, tensile strength, and tear strength of 24 kV/mm, 6.2 MPa, and 2.7 kN/m, respectively. The silicon dioxide is fumed silicon dioxide supplied by Sigma Aldrich with an average size of 12 nm. The silicon nitride is supplied by NanoAmor with an average size of 15-30 nm. The hardener used is dimethyl methylhydrogen siloxane (DMS), which is mixed with the polymer in the ratio of 10:1 (silicone rubber:hardener) [18].
Preparation of silicone rubber nanocomposites
The silicone rubber (SiR) Sylgard 184 was used in the form of a leaf-like specimen. The needle-plane electrode geometry was employed for the initiation and propagation of electrical treeing. The gap distance between the needle tip and the plane electrode was adjusted to 1 mm. Silicon dioxide (SiO2) and silicon nitride (Si3N4) nanoparticles were dried at 100 °C for 24 hours in a conventional vacuum oven to remove moisture. After preconditioning of the nanofillers, they were mixed with the SiR at 1, 3, and 5 weight percent (wt%) using a magnetic stirrer at room temperature for 30 minutes at 60 rpm. The nanofillers were then dispersed using an ultrasonicator for 1 hour to obtain a homogeneous dispersion [19]. After that, the SiR nanocomposite compound was mixed with its hardener (ratio 10:1) for 30 minutes at 125 rpm using a magnetic stirrer at room temperature. The specimen was cured at 100 °C for 45 minutes and prepared in leaf-like form as shown in Figure 1.
Test sample preparation
To study the electrical treeing and its partial discharge (PD), an online monitoring system was developed in this work, consisting of a stereomicroscope, an oscilloscope, a personal computer, and a charge-coupled device (CCD) camera, as shown in Figure 2. The system included an Olympus SZX16 research stereomicroscope equipped with an auxiliary Olympus DP 26 CCD camera with 115x magnification capability, sufficient to capture magnified images of the electrical tree propagation. A measuring impedance was used to detect the PD pulses together with the coupling capacitor, which acts as a voltage divider so that the test voltage does not appear across the measuring impedance [20]. The purpose of this procedure was to observe the electrical treeing growth optically, together with its associated PD, at room temperature.
Figure 2. Schematic diagram of experimental setup for electrical treeing studies
The experiments were conducted to record the tree growth and PD by applying a constant AC voltage of 10 kV or 12 kV at 50 Hz, with the voltage monitored using an oscilloscope. The samples were placed inside an acrylic cell containing silicone oil to avert surface flashover. Tree growth was continuously observed using the specially established method. The microscope and the CCD camera were interfaced through a computer, and the images of the electrical treeing at the test voltage were captured and recorded. During electrical tree growth, real-time images of the electrical tree were taken using the CCD camera fixed to the stereomicroscope with the aid of the CellSens digital imaging software. The PD readings were continuously monitored by the PD detector and oscilloscope and recorded on the computer by a LabVIEW programme.
RESULTS AND ANALYSIS
The tree initiation time, tree bridging time, tree propagation length, and partial discharge (PD) of the investigated nanocomposite samples were analysed. Figure 3 shows photographs of the electrical tree captured by the microscope at different stages.
Tree initiation time
The tree initiation time (Ti) of an electrical tree is the time at which small, observable trees initiate at the needle tip. Figures 4(a) and 4(b) show the Ti of the electrical trees in SiR with SiO2 and Si3N4 nanofillers at 0, 1, 3, and 5 wt%. Under 10 kV injection, the average Ti for unfilled SiR was 7.34 s, while the average Ti for SiR/SiO2 was 6.04 s, 5.68 s, and 8.99 s for 1, 3, and 5 wt%, respectively; the SiR/Si3N4 showed a faster Ti of 5.28 s, 5.68 s, and 7.57 s. Meanwhile, with 12 kV injection, the average Ti for pure SiR was 6.86 s, while the average Ti for SiR/SiO2 was 6.86 s, 6.75 s, and 10.65 s for 1, 3, and 5 wt%, respectively; the SiR/Si3N4 showed a faster Ti of 6.93 s, 6.04 s, and 7.10 s, respectively. The results of this statistical analysis show that, for the Ti, only a small difference exists between the pure SiR and the two different nanofiller types, with most initiation times falling between 5 and 12 seconds. The results show that the tree initiation times of the nanocomposites varied with the nanofiller type and filler loading used. There is an increment in the initiation time for the 5 wt% loading sample; however, the difference is very small. This result differs from Jamil et al.'s work, which showed that even small amounts of added nanofiller led to significant improvements in tree initiation times, as the nanocomposites with nanofillers took a long time to initiate compared to unfilled SiR. The difference arises because the current study used a shorter distance of 1 mm between the high-voltage and ground electrodes, compared to the longer distance of 2 mm used in the previous study. Generally, the nanocomposites played a role in prolonging tree initiation times compared to the unfilled polymer, because the trapped charges needed higher energy to be extracted from the polymer; more time was thus required for the tree to initiate within the nanocomposites [21]. However, if the distance between the high-voltage and ground electrodes is too short, the energy needed for the trapped charges to be extracted from the polymer is lower; thus the tree initiates more easily within the nanocomposites.
Tree propagation length
Figure 5 shows the electrical tree performance of the SiR nanocomposites with the different nanofillers at 1, 3, and 5 wt% filler loading. The unfilled SiR showed the fastest propagation rate, followed by the SiR/SiO2 nanocomposites, while SiR/Si3N4 exhibited the slowest propagation. Similar results were observed for the rest of the samples. It can be seen from Figure 5 that the tree propagation time increased with an increase in nanofiller loading. These results are in line with other studies in which higher filler concentrations led to densely packed structures that resist electrical tree progression: the tree channels need to propagate through the interfaces of the nanofiller in the packed structures of the polymer, which results in longer propagation times [11, 22-24]. From Figure 5, it is suggested that the Si3N4 nanofiller has a strong ability to slow down electrical tree growth. This result agrees with Hosier et al.'s work, in which polyethylene/nitride nanocomposites exhibited a higher breakdown voltage, indicating that nitride nanocomposites have resistance against electrical treeing [16].
Tree bridging time
Figure 6 shows the average tree bridging time (Tb) for pure SiR, SiR/SiO2, and SiR/Si3N4 with different nanofiller loadings. Tb is the time taken for the tree to reach the ground electrode, which in our study was 1000 μm from the high-voltage electrode. Both SiR/SiO2 and SiR/Si3N4 nanocomposites improved the tree bridging time compared to unfilled SiR. From Figure 6(a), the unfilled SiR recorded the shortest Tb, followed by the SiR/SiO2 and SiR/Si3N4 nanocomposites. The average Tb of the unfilled SiR was 36.45 s, while the Tb for the SiR/SiO2 nanocomposites was 66.74 s, 159.04 s, and 211.34 s at 1, 3, and 5 wt%, respectively. The SiR/Si3N4 nanocomposites showed the longest Tb of 69.30 s, 186.97 s, and 240.45 s for the same three loadings. From Figure 6(b), the 12 kV injection shows a broadly similar pattern: apart from the 1 wt% SiR/SiO2 sample, the unfilled SiR recorded the shortest Tb, followed by the SiR/SiO2 and SiR/Si3N4 nanocomposites. The average Tb of the unfilled SiR was 48.57 s, while the Tb for the SiR/SiO2 nanocomposites was 35.26 s, 153.72 s, and 301.04 s at 1, 3, and 5 wt%, respectively. The SiR/Si3N4 nanocomposites again showed the longest Tb of 60.83 s, 237.14 s, and 516.88 s for the same three loadings.
From these results, it is clear that the Tb of electrical treeing increased with increasing nanofiller loading. The SiR/Si3N4 nanocomposites showed the longest Tb compared to pure SiR and the SiR/SiO2 nanocomposites. The tree bridging time of the nanocomposites therefore depends on both the filler type and the nanofiller loading. Nanocomposites with nanofiller take a longer time to break down than unfilled SiR, and Tb increases with nanofiller concentration. Overall, the SiR/Si3N4 nanocomposites had a longer breakdown time than the other nanocomposites. This result agrees with Xu et al.'s work, in which polypropylene/nitride nanocomposites exhibited higher breakdown voltage and longer endurance under constant electrical stress [25].
Partial discharge patterns
In this study, it was observed that as the tree structure grows longer and more complex, the intensity and frequency of partial discharge (PD) events increase. Table 1 and Table 2 show the phase-resolved partial discharge (PRPD) patterns recorded during electrical tree growth, with each dot representing a single PD event. PD activity generally occurs only in the first quadrant (phases of 0° to 90°) and the third quadrant (phases of 180° to 270°), where the AC voltage rises towards its positive or negative peak, respectively [26].
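To make the PRPD construction concrete, the sketch below shows one common way to build such a pattern from a list of recorded PD events, each described by the phase angle of the applied AC voltage at which it occurred and its apparent charge. The event data, distributions, and plotting choices here are illustrative assumptions, not the acquisition settings used in this study.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative PD events: (phase angle in degrees, apparent charge in pC).
# Real data would come from the PD measurement system; these values are synthetic.
rng = np.random.default_rng(0)
phase = np.concatenate([rng.uniform(0, 90, 200), rng.uniform(180, 270, 200)])
charge = rng.gamma(shape=2.0, scale=200.0, size=400)  # pC

# A PRPD pattern is simply a scatter (or 2-D histogram) of apparent charge
# versus phase angle, accumulated over many AC cycles.
plt.scatter(phase, charge, s=4)
plt.xlim(0, 360)
plt.xlabel("Phase angle (degrees)")
plt.ylabel("Apparent charge (pC)")
plt.title("Illustrative PRPD pattern")
plt.show()
```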
Table 1. PRPD patterns over the three AC cycles following the first PD event for a tree grown at 10 kV in the three types of samples

From the PRPD patterns in Table 1 and Table 2, it can be seen that the intensity and frequency of PD events increased from cycle to cycle. For example, for the pure SiR sample in Table 1, the first AC cycle shows only a few discharges below 600 pC; the discharges build up to more than 600 pC in the second AC cycle and exceed 1000 pC in the third AC cycle as the tree approaches the ground electrode. Similar behaviour can be seen in the other SiR, SiR/SiO2, and SiR/Si3N4 nanocomposite samples shown in Table 1 and Table 2. These patterns confirm that PD activity is related to the extent of tree growth. The high level of PD activity in the later stages of tree growth could therefore be exploited in service, for example to identify insulation damage caused by electrical treeing [26]. Interestingly, the silicone rubber nanocomposite with 5 wt% Si3N4 showed a similar overall pattern but the lowest PD magnitude among the nanocomposites with 1 wt% and 3 wt% loadings. This PD behaviour is linked to the tree-suppression behaviour, since Si3N4 at the 5 wt% loading produced both the slowest tree propagation and the lowest PD magnitude. This study therefore indicates that Si3N4 nanofillers can be more effective than SiO2 nanofillers in resisting the growth of electrical treeing in silicone rubber nanocomposites.
CONCLUSION
This paper has presented an electrical treeing and partial discharge (PD) study of silicone rubber (SiR) filled with silicon nitride (Si3N4) and silicon dioxide (SiO2) nanofillers. The following conclusions were obtained. The presence of SiO2 and Si3N4 nanofillers in SiR slowed down the growth of electrical treeing, so the fillers act as electrical tree retardants. Increasing the filler concentration also resulted in better electrical treeing suppression. Furthermore, the SiR/5 wt% Si3N4 nanocomposite showed the greatest improvement in electrical tree propagation time and tree growth rate compared to unfilled SiR and the SiR/SiO2 nanocomposites. It can therefore be concluded that Si3N4 nanofiller has good potential as an additive in polymeric insulating materials to retard electrical tree growth. Finally, the associated PD activity was found to intensify from tree initiation until bridging. The relationship between PD activity and electrical tree growth indicates that unseen electrical trees in high-voltage insulation can conveniently be diagnosed using PD detection alone.
Figure 1. Configuration of leaf-like specimen: (a) top view, (b) side view
Figure 4. Tree initiation time for pure SiR, SiR/SiO2, and SiR/Si3N4 nanocomposites with various loading levels at voltages of (a) 10 kV and (b) 12 kV
Figure 5. Tree propagation lengths across nanocomposites with various loading levels at voltages of (a) 10 kV and (b) 12 kV
Figure 6. Tree bridging time for pure SiR, SiR/SiO2, and SiR/Si3N4 nanocomposites with various loading levels at voltages of (a) 10 kV and (b) 12 kV
Table 1. PRPD patterns over the three AC cycles following the first PD event for a tree grown at 10 kV in the three types of samples (continued)
Table 2. PRPD patterns over the three AC cycles following the first PD event for a tree grown at 12 kV in the three types of samples
Table 2. PRPD patterns over the three AC cycles following the first PD event for a tree grown at 12 kV in the three types of samples (continued)
Quantum Monte Carlo Study of a Positron in an Electron Gas
Quantum Monte Carlo calculations of the relaxation energy, pair-correlation function, and annihilating-pair momentum density are presented for a positron immersed in a homogeneous electron gas. We find smaller relaxation energies and contact pair-correlation functions in the important low-density regime than predicted by earlier studies. Our annihilating-pair momentum densities have almost zero weight above the Fermi momentum due to the cancellation of electron-electron and electron-positron correlation effects.
Electron-positron annihilation underlies both medical imaging with positron emission tomography (PET) and studies of materials using positron annihilation spectroscopy (PAS) [1]. Positrons entering a material rapidly thermalize and the majority annihilate with opposite-spin electrons to yield pairs of photons at energies close to 0.511 MeV. In a PET scan, positrons are emitted by radionuclides in biologically active tracer molecules and the resulting annihilation radiation is measured to image the tracer concentration. The interaction of low-energy positrons with molecules is therefore of substantial experimental and theoretical interest [2]. PAS is used to investigate microstructures in metals, alloys, semiconductors, insulators [1], polymers [3], and nanoporous materials [4]. Positrons are repelled by the positively charged nuclei and tend to become trapped in voids within the material. The positron lifetime is measured as the interval between the detection of a photon emitted in the β+ radioactive decay that produces the positron and the detection of the annihilation radiation [1]. The lifetime is characteristic of the region in which the positron settles, and PAS is a sensitive, nondestructive technique for characterizing the size, location, and concentration of voids in materials. Measuring the Doppler broadening of the annihilation radiation or the angular correlation between the two 0.511 MeV photons yields information about the momentum density (MD) of the electrons in the presence of the positron. These techniques may be used to investigate the Fermi surfaces of metals [5].
The aim of PAS experiments is to investigate a host material without the changes induced by the positron. The positron is, however, an invasive probe which polarizes the electronic states of the material. Disentangling the properties of the host from the changes induced by the positron is a major theoretical challenge. Positrons in condensed matter may be modeled with two-component density functional theory (DFT) [6], in which the correlations are described by a functional of the electron and positron density components. Within the local density approximation (LDA), this functional is obtained from the difference ∆Ω between the energy of a homogeneous electron gas (HEG) with and without an immersed positron. ∆Ω is known as the relaxation energy, and is equal to the electron-positron correlation energy.
Two-component DFT gives reasonable electron and positron densities, but the DFT orbitals do not describe electron-positron correlation properly [6,7]. The electron-positron pair-correlation function (PCF) g(r) and the annihilating-pair momentum density (APMD) ρ(p) constructed from the DFT orbitals are therefore poor. The contact PCF g(0) is particularly important because it determines the annihilation rate λ = 3g(0)/(4c³r_s³) [1] for a positron immersed in a paramagnetic HEG, where r_s is the electron density parameter and c is the speed of light in vacuo [8]. If the electron and positron motions were uncorrelated, g(0) would be unity, but the strong correlation leads to much larger values, particularly at low densities, where an electron-positron bound state (positronium or Ps) or even an electron-electron-positron bound state (Ps⁻) may be formed.
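As a rough numerical illustration of the annihilation-rate expression quoted above (in Hartree atomic units, with λ = 3g(0)/(4c³r_s³)), the short sketch below converts λ into a positron lifetime at a metallic density. The values chosen for r_s and g(0) are arbitrary placeholders used to show the arithmetic; they are not results from this work.

```python
# Annihilation rate lambda = 3*g0 / (4 * c**3 * rs**3) in Hartree atomic units,
# as quoted in the text for a positron in a paramagnetic HEG.
C_AU = 137.036                 # speed of light in atomic units (1/alpha)
AU_TIME_SECONDS = 2.4189e-17   # one atomic unit of time in seconds

def positron_lifetime_ns(rs, g0):
    """Lifetime tau = 1/lambda, converted from atomic units to nanoseconds."""
    lam_au = 3.0 * g0 / (4.0 * C_AU**3 * rs**3)
    return AU_TIME_SECONDS / lam_au * 1e9

# Illustrative numbers only: rs = 2 (a typical metallic density) with g0 = 1
# (uncorrelated motion) versus an arbitrary enhanced value g0 = 5.
for g0 in (1.0, 5.0):
    print(f"rs = 2, g(0) = {g0}: tau ~ {positron_lifetime_ns(2.0, g0):.3f} ns")
```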
We have used the variational and diffusion quantum Monte Carlo (VMC and DMC) methods [9,10] as implemented in the casino code [11] to study a single positron in a HEG. Fermionic antisymmetry is imposed via the fixed-node approximation, in which the nodal surface is constrained to equal that of a trial wave function. We used Slater-Jastrow (SJ) and Slater-Jastrow-backflow (SJB) trial wave functions [12,13]. The latter go beyond the single-particle SJ nodal surface by replacing the particle coordinates in the Slater determinants by "quasiparticle coordinates." SJB wave functions give the highest accuracy obtained to date for the HEG [12,13]. We also tested two types of orbitals: (i) plane-wave orbitals for each particle and (ii) orbitals which describe the pairing between the electrons and the positron. The pairing orbitals were obtained from mean-field calculations performed in the reference frame of the positron, so the orbitals are functions of the separation of an electron and the positron [14]. Within this impurity-frame DFT (IF-DFT) method, the pairing orbitals describe the electron-positron correlation quite well on their own [14] and give a different nodal surface from the plane-wave orbitals. (NB, our QMC calculations were performed in the laboratory frame.) The four wave-function forms used are given in Eq. (1), where R denotes the positions of all the particles, r_↑ and r_↓ denote the positions of up- and down-spin electrons, respectively, r_p is the positron position, and [· · ·] denotes a Slater determinant. The Jastrow exponent J(R) [15] and the backflow displacement ξ(R) [13] contain parameters that were optimized separately for each wave function and system. The Jastrow exponents were first optimized using the efficient VMC variance-minimization scheme of Ref. 16, and then all the parameters (including the backflow parameters) were optimized together using the VMC energy-minimization scheme of Ref. 17. The pairing orbitals {φ_i} were represented using B-spline functions on a real-space grid [18]. The electron-positron cusp condition was enforced on the pairing orbitals for wave function Ψ_pair^SJ [19,20]; for the other three wave functions, the cusp conditions were imposed via the Jastrow factor. In all our calculations the simulation-cell Bloch vector [21] was chosen to be k_s = 0.
Tests at high (r_s = 1) and low (r_s = 8) electron densities show that the qualitative features of the variations in ∆Ω, g(r), and ρ(p) with r_s are the same for each of the four wave functions of Eq. (1). However, as shown in the auxiliary material [22], we obtained lower VMC and DMC energies with the SJB wave functions (Ψ_PW^SJB and Ψ_pair^SJB) than with the SJ ones (Ψ_PW^SJ and Ψ_pair^SJ), and therefore we used SJB wave functions to obtain all our main results. The pairing orbitals give lower SJB-VMC energies than the plane-wave orbitals, but the SJB-DMC energies with the plane-wave and pairing orbitals are almost identical. The lack of sensitivity to the orbitals used, and hence to the nodal surface, suggests that the DMC energies are highly accurate. The energies reported in this paper are from DMC calculations using wave function Ψ_PW^SJB. Such calculations are considerably less expensive than calculations using Ψ_pair^SJB due to (i) the lower energy variance achieved with Ψ_PW^SJB [22] and (ii) the fact that plane-wave orbitals are cheaper to evaluate. The DMC energies were extrapolated to zero time step. Our production DMC calculations were performed in cells containing N = 54 electrons. Tests of convergence with respect to system size up to N = 114 electrons are described in the auxiliary material [22]. The cell volume was chosen to be (N − 1) × (4/3)πr_s³, so that the electron density far from the positron was correct. IF-DFT calculations [14] suggest that finite-size effects due to the interaction of images of the positron are negligible for N ≥ 54 electrons.
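The cell-volume choice quoted above is simple arithmetic, and the sketch below checks it for one case: with V = (N − 1)(4/3)πr_s³, the density (N − 1)/V equals the nominal HEG density 3/(4πr_s³), consistent with the statement that the electron density far from the positron is correct. The specific numbers are only an illustration of the formula given in the text.

```python
import math

def cell_volume(n_electrons, rs):
    """Simulation-cell volume V = (N - 1) * (4/3) * pi * rs**3 (atomic units)."""
    return (n_electrons - 1) * (4.0 / 3.0) * math.pi * rs**3

# Example with the production system size quoted in the text: N = 54, rs = 8.
N, rs = 54, 8.0
V = cell_volume(N, rs)
target_density = 3.0 / (4.0 * math.pi * rs**3)   # HEG density for the given rs
print(f"V = {V:.1f} a.u.^3")
print(f"(N - 1)/V      = {(N - 1) / V:.6e} a.u.^-3")
print(f"target density = {target_density:.6e} a.u.^-3")
```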
Our DMC relaxation energies are plotted in Fig. 1 and are well fitted by the form of Eq. (2), where A_{-1} = −0.260361, A_0 = −0.261762, A_1 = 0.00375534, B_1 = 0.113718, and B_2 = 0.0270912. Equation (2) tends to the correct low-density limit of the energy of the Ps⁻ ion [23]. Equation (2) does not yield the exact high-density behavior of the random phase approximation (RPA), although this is only relevant for r_s < 0.1 [24]. VMC energies for a positron in a HEG have been reported previously [25], but we have used superior trial wave functions and have obtained very different results. At high densities our relaxation energies are similar to those of Lantto [26], but at lower densities we obtain smaller values. The SJB-DMC and IF-DFT results [14] and the data of Boroński and Stachowiak [27] show similar behavior with r_s, while the Boroński-Nieminen fit [6] to the data of Ref. 28 is markedly different. Boroński and Nieminen's [6] expression for ∆Ω(r_s) is widely used in two-component DFT calculations, but our study suggests it is not very accurate and should be replaced by Eq. (2).

FIG. 1: (Color online) Relaxation energy against density parameter from our SJB-DMC calculations and other studies [14,26,28,29], relative to the Boroński-Nieminen expression, ∆Ω_BN [6] (horizontal dashed line).

We calculated the APMD within VMC using optimized SJB trial wave functions with pairing orbitals (Ψ_pair^SJB), because these give lower VMC energies than plane-wave orbitals (Ψ_PW^SJB). These calculations were performed by constraining an electron and the positron to lie on top of one another throughout the simulation [22]. APMDs at different densities are plotted in Fig. 2, with the normalization chosen such that $\int_0^{\infty} 4\pi p^2 \rho(p)\,dp = (4/3)\pi k_F^3$. Our results clearly show the enhancement of the APMD below the Fermi momentum predicted by Kahana [30], but our data differ quantitatively from previous results [14,27,30]. Our VMC data have almost no weight above the Fermi momentum over the entire density range studied, even though the weight in the MD above k_F in the HEG is substantial at low densities. For example, we find that the APMD immediately above k_F is roughly 10% of the value for the HEG at r_s = 1 and 3% at r_s = 8.
Suppression of the weight in the APMD above k_F was demonstrated theoretically by Carbotte and Kahana [31], but our study gives a more detailed and accurate picture. We investigated the weight above k_F using VMC with the wave function Ψ_PW^SJB by selectively eliminating interparticle correlations. Neglecting electron-electron and electron-positron correlations gives the familiar "top hat" MD of the noninteracting system. Calculations with the electron-positron terms removed give an APMD indistinguishable from the MD of the HEG, with a tail above k_F. Calculations including electron-positron correlation but neglecting electron-electron correlation show Kahana enhancement below k_F and a tail above k_F. When, however, both electron-electron and electron-positron correlations are included, the tail above k_F is largely suppressed, as shown in the lower panel of Fig. 2.
The suppression of the tail in the APMD can be explained by examining the behavior of the two-body terms in the Jastrow exponent. (For simplicity, we consider the Ψ_PW^SJ wave function in the following discussion.) The Jastrow exponent J(R) is the sum of electron-electron [u_↑↑(r) and u_↑↓(r), where the arrows indicate spins] and electron-positron [u_ep(r)] terms. If one assumes that

$$u_{\uparrow\uparrow}(r) = u_{\uparrow\downarrow}(r) = -u_{\mathrm{ep}}(r), \qquad (3)$$

then the APMD has exactly zero weight above k_F, as shown in the auxiliary material [22]. The RPA (linear response theory) shows that Eq. (3) holds at large r, and the Kato cusp conditions force the gradients of u_↑↓(r) and u_ep(r) to satisfy Eq. (3) at r = 0. The cusp conditions for parallel and antiparallel spin electrons are different and therefore u_↑↓(r) and u_↑↑(r) must differ at small r, but antisymmetry ensures that the probability of parallel-spin electrons being closer than r_s is small. As shown in the auxiliary material, plots of the terms in the Jastrow exponent demonstrate the approximate validity of Eq. (3).

FIG. 2: (Color online) Top: APMDs at different densities, compared with the results of Kahana [30] and Stachowiak [27], respectively. Bottom: APMDs for the positron-in-HEG and the HEG at r_s = 8 and N = 54 electrons, calculated using Ψ_PW^SJB.
We calculated the PCFs within VMC and DMC using Ψ_PW^SJB wave functions, because these give the same results as pairing orbitals but the calculations are much cheaper [22]. The final results were evaluated by extrapolated estimation (twice the DMC PCF minus the VMC PCF) [32], in order to eliminate the leading-order errors. In Fig. 3, the electron-positron contact PCF g(0) is plotted relative to the Boroński-Nieminen form [29], which is a fit to the data of Stachowiak and Lach [33]. Our contact PCF data are well represented by

$$g(0) = 1 + 1.23 r_s + a_{3/2} r_s^{3/2} + a_2 r_s^2 + a_{7/3} r_s^{7/3} + a_{8/3} r_s^{8/3}, \qquad (4)$$

where a_{3/2} = −3.38208, a_2 = 8.6957, a_{7/3} = −7.37037, and a_{8/3} = 1.75648. Equation (4) satisfies the high-density (RPA) [24] and low-density (Ps⁻) limiting behaviors [23]. Our full data for g(r) are given in the auxiliary material [22]. The IF-DFT data follow the extrapolated SJB data quite well, while the other many-body calculations give somewhat larger values of g(r) at low densities. In the density range r_s = 5-8 a.u., our values of g(0) are approximately 9% smaller than those given by the Boroński-Nieminen expression [6]. The local increase of the electron density around the positron caused by their mutual attraction is modeled in two-component DFT using an "enhancement factor" based on data for g(0). Using our smaller values of g(0) would reduce the enhancement factor and hence the overestimation of annihilation rates obtained with the positronic LDA [34].

FIG. 3: (Color online) Deviation of the contact PCF g(0) from the form g_BN(0) of Boroński and Nieminen [29] together with other results in the literature [14,26,28,33,35,36].
In conclusion, our results are the most accurate obtained so far for a positron in a HEG. Our data for ∆Ω are sufficient to define the energy functional for a two-component positronic DFT within the LDA. They would also be useful in developing semilocal [37] or other functionals. Our PCF data give a smaller enhancement factor than the standard Boroński-Nieminen expression [6]. Our APMDs have very little weight above k_F because of the cancellation of electron-electron and electron-positron correlation effects. We have derived an exact result relating Eq. (3) to the complete absence of weight in the APMD ρ(p) for p > k_F, which is useful in understanding this effect.
We acknowledge financial support from the UK Engineering and Physical Sciences Research Council (EPSRC). Computer resources were provided by the Cambridge High-Performance Computing Facility and the Lancaster High-End Computing cluster.
Transformation and endurance of Indigenous hunting: Kadazandusun-Murut bearded pig hunting practices amidst oil palm expansion and urbanization in Sabah, Malaysia
1. Land-use change and political-economic shifts have shaped hunting patterns globally, even as traditional hunting practices endure across many local socio-cultural contexts. The widespread expansion of oil palm cultivation, and associated urbanization, alters land-use patterns, ecological processes, economic relationships, access to land and social practices. 2. In particular, we focus on the socio-ecological dynamics between Kadazandusun-Murut (KDM) hunters in Sabah, Malaysian Borneo, and bearded pigs (Sus barbatus; Malay: 'babi hutan'), the favoured game animal for non-Muslim communities throughout much of Borneo. We conducted 38 semi-structured interviews spanning over 50 hr with bearded pig hunters, asking them about contemporary hunting practices and motivations, changes in hunting practices, changes in pig behaviour, and patterns of animal protein consumption in village and urban contexts. 3. Amidst widespread land-use change, primarily driven by oil palm expansion, respondents reported substantially different characteristics of hunting in oil palm plantations as compared to hunting in forests. Additionally, 17 of 38 hunters, including 71% (10/14) of hunters who started hunting before 1985 compared to 26% (6/23) of hunters who started hunting in 1985 or later, mentioned that bearded pigs are behaving in a more skittish or fearful way as compared to the past. Our respondents also reported reductions in hunting frequency and wild meat consumption in urban contexts as compared to rural contexts. 4. However, despite these substantial changes in hunting and dietary practices, numerous KDM hunting motivations, hunting techniques and socio-cultural traditions have endured over the last several decades. For some, bearded pig meat remains deeply tied to food provision, gifting and sharing customs, and cultural components of celebrations and feasts. 5. Oil palm has cultivated new hunting practices that differ from those in forests, and has potentially contributed to altered bearded pig behaviour due to increased hunting accessibility. Together, oil palm and urbanization are helping reshape the KDM-bearded pig socio-ecological system. In light of these reshaped connections, we recommend location-specific management approaches that ensure fair access to the dietary and social benefits of bearded pig hunting while preserving the critical conservation needs of bearded pig populations and habitat. These twin goals are particularly urgent given the confirmed outbreak of African Swine Fever (ASF), and mass deaths of domestic pigs and wild bearded pigs, in Sabah and Kalimantan in 2021.

KEYWORDS
African Swine Fever, Borneo, coupled human and natural systems, environmental governance, land-use change, socio-ecological systems, Southeast Asia, wildlife management
| INTRODUCTION
Hunting has been called 'the master behaviour pattern of the human species … which puts motion and direction into the diagram of [hu]man's morphology, technology, social organization, and ecological relations …' (Laughlin, 1968). In addition to the provision of meat, a typical hunting event includes, among other behaviours, searching for prey, pursuing animals, killing and butchering one or more animals, transporting carcasses, distributing meat among households or markets and communicating ecological information throughout and following the hunt (Laughlin, 1968; Puri, 2005). Correspondingly, a great number of physical, cultural, social and ecological dynamics are linked to hunting practices: hunting is, in short, one of the most fundamental and enduring of human-wildlife relationships.
Land-use change and hunting are intimately linked. For example, land conversion increases access to wildlife habitats and often leads to dramatic and unsustainable levels of hunting (e.g. Abernethy et al., 2013; Harrison et al., 2016; Parry et al., 2007). Furthermore, land conversion has been shown to influence hunting practices and techniques in a variety of socio-cultural contexts (Luskin et al., 2014; Wightman et al., 2002). The many and varied modes through which land-use changes interact with hunting practices call for greater understanding of the links between socio-ecological systems, social practices, food security and the sustainability of wildlife populations (Bassett, 2005; Brashares et al., 2014). Drawing on a case study of these integrated dynamics, we investigate the ways that oil palm expansion, urbanization and ancillary socio-cultural factors have been tied to the transformation and endurance of bearded pig hunting practices in Sabah, Malaysia.
| Historical and contemporary bearded pig hunting practices in Borneo and Sabah
The bearded pig (Sus barbatus, Bahasa Melayu-'babi hutan': 'forest pig') is a large, nomadic Suid species native to Sundaland and woven into the socio-ecological fabric of Borneo (Luskin & Ke, 2018;Puri, 2005).Bearded pig hunting is a deeply embedded social practice in many Indigenous communities in Borneo, who have hunted and consumed bearded pigs for over 40,000 years (Harrisson et al., 1961;Medway, 1964).For example, for the Penan Benalui in East Kalimantan, hunting is the most regularly occurring economic activity and a central organizing activity in Penan society (Puri, 2005).Some traditional hunting techniques are also tied to nomadic movements of bearded pigs (e.g.Banks, 1949), which are thought to periodically move long distances of up to 650 km in large herds of up to 300 individuals (Caldecott et al., 1993;Davies & Payne, 1982;Pfeffer, 1959).Bearded pig meat has been shown to account for 54%-97% of wild meat by weight in Indigenous Bornean societies (Bennett et al., 2000;Chin, 2001;Puri, 2005), for whom wild meat can contribute to as much as 36% of meals (Bennett et al., 2000).Thus, the bearded pig is the most heavily consumed terrestrial game animal for Indigenous, non-Muslim communities throughout Borneo, and is also widely considered the favourite type of wild meat among many of these communities (Bennett et al., 2000;Chin, 2001;Janowski, 2014;Puri, 2005).
Bearded pig hunting also holds significant implications for recreation, gift-giving and social practices in many Indigenous Bornean communities (Harrisson, 1965;Janowski, 2014;Wadley & Colfer, 2004).More broadly within Malaysia, pigs and pig hunting are situated at intersections of religion, ethnic identity and geography.
In Malaysia, a multicultural society politically controlled by ethnic Malays, one of the many socio-religious delineations between Malay Muslim elites and other ethno-religious groups is the consumption of pig meat: for religious reasons, many Malay Muslims find pigs and pork highly objectionable-to the point that 'babi' ('pig') is an insult (Yusof, 2012).In contrast, other groups, including ethnic Chinese minorities, consume pork in large quantities (Neo, 2011).The prominence of religious food practices has a dramatic influence on patterns of pork consumption in Malaysia (Chua, 2012), to the extent that a 'pig line' has even been described in Sarawak, delineating predominantly Muslim coastal fishing communities from primarily 5. Oil palm has cultivated new hunting practices that differ from those in forests, and has potentially contributed to altered bearded pig behaviour due to increased hunting accessibility.Together, oil palm and urbanization are helping reshape the KDMbearded pig socio-ecological system.In light of these reshaped connections, we recommend location-specific management approaches that ensure fair access to the dietary and social benefits of bearded pig hunting while preserving the critical conservation needs of bearded pig populations and habitat.These twin goals are particularly urgent given the confirmed outbreak of African Swine Fever (ASF), and mass deaths of domestic pigs and wild bearded pigs, in Sabah and Kalimantan in 2021.
K E Y W O R D S
African Swine Fever, Borneo, coupled human and natural systems, environmental governance, land-use change, socio-ecological systems, Southeast Asia, wildlife management non-Muslim inland communities who are nutritionally dependent on wild pig meat (Bolton et al., 1972).Similarly, ethno-religious dynamics shape hunting practices and influence which species are targeted for hunting in Indonesian Borneo (Wadley et al., 1997).
Pig hunting practices take place within an environmental context of widespread deforestation and agricultural expansion (Gaveau et al., 2014;Wong et al., 2012).Luskin and Ke (2019) estimated significant (20% or more) habitat loss and range reduction from 1990 to 2010 in each of the three bearded pig range locations: Peninsular Malaysia, Sumatra and Borneo.This decline was driven by agriculture-related habitat fragmentation (primarily due to oil palm and rubber plantations), leading to the recent re-listing of the bearded pig as a Vulnerable species in the International Union for Conservation of Nature and Natural Resources Red List (Luskin, Ke, et al., 2017).In addition to contributing to habitat loss, oil palm plantations have reshaped bearded pig ecology by reducing the area available for some behaviours (e.g.limited wallowing and nesting sites in plantations), altering demographics (e.g.increasing the proportion of young pigs in plantations) and changing activity patterns (e.g.shifting pigs to nocturnal activity patterns in plantations) (Davison et al., 2019;Love et al., 2018).Bearded pigs also receive food subsidies from crop-raiding within oil palm plantations (Davison et al., 2019;Love et al., 2018), and it has been hypothesized that this behaviour could potentially increase wild pig populations near oil palm (Davison et al., 2019;Love et al., 2018;Luskin, Brashares, et al., 2017).These findings raise questions about how bearded pig responses to forest-oil palm mosaics might affect hunting practices, and about how bearded pig hunting should be appropriately managed for long-term bearded pig conservation and socio-ecological sustainability.
Across the bearded pig range, pig hunting management is regulated by a heterogeneous matrix of policies. Hunting of the species is permitted in some form across bearded pig range countries (Indonesia, Malaysia and Brunei), with restrictions varying by jurisdiction and including measures such as hunting permits, no-hunting protected areas and native hunting clauses (Brunei Wildlife Protection Act 1984; Act of the Republic of Indonesia No. 5 of 1990 concerning Conservation of Living Resources and their Ecosystems, 1990; Laws of Sarawak, Wild Life Protection Ordinance, 1998; Wildlife Conservation Enactment 1997). Law enforcement capacity also varies by region (Bennett et al., 2000; Lintangah et al., 2015; Luskin et al., 2014).

In Sabah, Malaysia, it is legal to hunt bearded pigs and sell the meat with appropriate licenses from the Sabah Wildlife Department (Wildlife Conservation Enactment 1997). A sport hunting licence for bearded pig costs 5.00 MYR (~1.22 USD) per animal and a commercial hunting licence for bearded pig costs 50.00 MYR (~12.17 USD) per animal (Wildlife Conservation Enactment 1997).

Hunting of bearded pigs in Sabah is widespread in many rural areas, and bearded pig meat remains an important food resource for many communities (Bennett et al., 2000; Mojiol et al., 2013).

However, a recent African Swine Fever (ASF) outbreak in Sabah has raised concerns for bearded pig populations as well as for local communities (Chan, 2021; The Star, 2021). First reported at the end of 2020, ASF has spread rapidly throughout numerous forests and districts in Sabah over the first half of 2021 (S. Nathan, pers. comm.). ASF is a deadly virus with case fatalities in domestic and wild pigs ranging from 47.7% to 100% (FAO, 2021a; Liu et al., 2020).

To mitigate the spread of ASF, and due to movement control orders related to the COVID-19 pandemic, the government froze hunting licences in Sabah in early 2021 (Chan, 2021; The Borneo Post, 2021; The Star, 2021). In mid-2021, there were also reports of confirmed ASF cases as well as mass bearded pig deaths in Kalimantan (Berau Post, 2021; Fadil, 2021), indicating possible spread of ASF through wild bearded pig populations beyond Sabah.
| Economic, environmental and social processes of oil palm expansion in Sabah
Sabah has been on the frontlines of the oil palm boom since the late 20th century.This transformative process is noteworthy for its deep roots in globalized commodity chains, through which oil palm became highly valued as a 'global flex crop' useful for food, fuel and personal care (Alonso-Fradejas et al., 2016).By the 1960s, Borneo had been identified as a major resource frontier, providing more tropical timber than anywhere else in the world by the late 1970s (Brookfield et al., 1995).With timber extraction helping pave the way for oil palm expansion, Malaysia emerged as the global leader in palm oil production in the 1970s (FAO, 2021b).By the early 1980s, oil palm had become Sabah's most important cash crop, fuelled by high profitability and the diversity of commercial applications for palm oil (Bernard & Bissonnette, 2011).Oil palm plantation area in Sabah reached over 1.7 million ha (6,867 sq.miles) by 2015; 68% of this total area was converted to oil palm within 5 years of forest clearance (Gaveau, Sheil, et al., 2016).As of 2015, roughly 24% of Sabah's total land area was covered by oil palm or pulpwood plantations (Gaveau, Sheil, et al., 2016).These large-scale economic and land-use changes resulted in profound shifts in socio-ecological relationships in Sabah.In significant part, Sabah became a particular manifestation of the 'global land grab' in which large tracts of land were allocated to a small number of business, bureaucratic and political elites (Cramb & Curry, 2012).Indeed, some have argued that this socioenvironmental shift represents an extension of colonial legacies of territorialization, with large plantation corporations taking a capitalist role analogous to their imperialist land-control forbearers and shaping labour relations and livelihood options across the state (Bernard & Bissonnette, 2011;Cooke, 2012).While oil palm smallholdings became popular and often profitable options for some Sabahans with access to land (Cooke, 2012), most labour and management in the vast stretches of industrial oil palm plantations began coming from outside of Sabah.For example, by the late 1990s, 95% of workers on Federal Land Development Authority (FELDA) plantations in Sabah were migrants from the Philippines or Indonesia (Bernard & Bissonnette, 2011).As a result, this migrant labour force, consisting of both legal and illegal workers, has become a mainstay of Sabah's plantation economy (Kelly, 2011).
For their part, Sabahans may take administrative (or occasionally labourer) positions within large oil palm companies, own their own oil palm smallholdings, or move to urban areas for relatively wellpaying jobs in manufacturing and retail.For those Sabahans remaining in rural parts of the state, disputes over land allocation and ownership have reduced access to croplands and forests in some areas, weakening traditional forms of food security and restricting accessibility to non-timber forest resources (Bernard & Bissonnette, 2011).Due in large part to the vast areas already gazetted for timber production and oil palm plantations, new land for oil palm 'either has to encroach on claimed but untitled lands on which customary rights have been established or excised from existing government forest reserves' (Cooke, 2012).
| Oil palm expansion, urbanization and bearded pig hunting among Kadazandusun-Murut (KDM) hunters in Sabah
Despite the historical and contemporary prominence of bearded pig hunting and dietary relationships, there has been little published research on these practices and how they have been reshaped by the socio-economic and environmental changes brought about by oil palm expansion.Case studies and syntheses, both regional and global, are needed to elucidate how relationships between human societies and natural resources change in response to factors such as land-use change and political-economic forces (Lambin & Meyfroidt, 2010).In this paper, we argue that the socio-ecological processes of oil palm expansion and urbanization in Sabah have profoundly shaped-and continue to shape-hunting practices within the influential Kadazandusun-Murut ethnic group (or 'KDM', the common shorthand for this community in Sabah).The KDM make up roughly a third of the Bumiputera population (literally translated to 'sons of the land', used in Malaysia to refer to Malays and Indigenous ethnic minority groups) within the state of Sabah, and over 20% of the total population of Sabah (Malaysia Department of Statistics, 2011).Within Sabah, the KDM peoples are considered among the Orang Asal, or Indigenous Peoples of Malaysia.In this study, we investigate the particular ways that KDM-bearded pig hunting practices have been preserved or changed in the face of the environmental, economic and social changes that have come with oil palm expansion and urbanization.Specifically, we interviewed KDM hunters in Sandakan District, Sabah, to infer persistence and change in their hunting practices, perceptions of bearded pig behaviour, meat and fish consumption patterns, hunting motivations, and hunting techniques.We discuss ways our findings shed light on the relationships between oil palm expansion, urbanization and hunting, and we call for biocultural conservation that encompasses KDM social practices as well as long-term management of bearded pig populations.
| Study area
We conducted our study in Sandakan District (5.840415, 118.116757), located along the eastern coast of Sabah, Malaysian Borneo (Figure 1). Sandakan is the third most populous district in Sabah, with a population of 396,290 in the 2010 census (Malaysia Department of Statistics, 2015). Between 2000 and 2010, the population of the district grew by 13.6% (Malaysia Department of Statistics, 2015). Most land area in Sandakan district is covered by industrial plantation agriculture (Gaveau et al., 2014). The Sandakan economy is also supported by numerous factories and industrial uses, including oil terminals, oil refineries, glue factories, a shipyard and wood-based factories (Sabah State Government, 2014). Of the Malaysian citizen population of Sandakan (constituting 63% of the total population), 71% identify as Bumiputera (Malay, Kadazandusun, Bajau, Murut and other Bumiputera), 25% are of Chinese descent, 0.4% are of Indian descent and 3.5% are from additional racial-ethnic groups (Malaysia Department of Statistics, 2015).

FIGURE 1 Situated within Southeast Asia (top), the study area was Sandakan District, Sabah, Malaysian Borneo (bottom). Top map created with package 'mapdata' (Becker & Wilks, 2018). Bottom map created using land cover data from Gaveau, Salim et al. (2016) and with ArcMap version 10.7.1 (Esri Inc, 2019).
| Data collection
We conducted 38 in-depth, semi-structured interviews with Kadazandusun-Murut (KDM) bearded pig hunters in 2019 in Sandakan District (Figure 1).Our interview protocol was approved by the Committee for Protection of Human Subjects at the University of California, Berkeley (Protocol number: 2019-04-12096), by the Sabah Biodiversity Council (Ref.No. JKM/MBS.1000-2/2JLD.9 (59)), and by the Sandakan Municipal Council (Ruj.MPS100-48/001/0000/035).All hunters interviewed were men.Although women in some Bornean communities play significant roles in the various cultural practices associated with bearded pig consumption, 1 we did not encounter any women engaged in hunting over the course of our study.More broadly, hunting is typically associated with men in Indigenous Bornean societies (Alexander & Alexander, 1994;Thambiah, 1997).We defined a 'hunter' as someone who had hunted bearded pigs twice per year or more, on average, for a span of at least 5 years.A hunter did not need to be hunting regularly at the time of the interview to be included in our study.We identified hunters through our existing social and professional networks, and we relied on referral ('snowball') sampling, by which respondents connected us with other hunters.While this strategy did not provide us with a representative pool of the KDM hunting community in Sandakan District, it promoted trust and helped identify a set of highly knowledgeable respondents (e.g.Luskin et al., 2014).
When potential respondents were in a village (kampung) setting, we sought and received permission from the village chief before proceeding with interviews. Before conducting an interview, we asked each participant for his verbal consent to participate in the research. We asked for verbal consent to accommodate respondents who may have felt uncomfortable reading or signing a written consent form. To protect the privacy of respondents, we did not record their names or any audio.

Two (J.B., V.T.J.) or three (D.J.K., J.B. and V.T.J.) authors conducted each interview, primarily in Bahasa Melayu (supplemented only occasionally with English if respondents were comfortable and chose to speak in English). Both primary interviewers (J.B. and V.T.J.) spoke fluent Bahasa Melayu, and one of the primary interviewers (V.T.J.) is a local Sabahan. Each interview lasted from 0.5 to 2.5 hr and took place in a location chosen by the respondent.

Respondents were normally interviewed individually, but occasionally social norms and relationships led to respondents being more comfortable with an interview in a small group (i.e. 2-3 individuals). Our survey consisted of basic demographic information (e.g. age group, home village/city, education level, work information) and questions about their hunting practices (see Supporting Information for the interview guide in English and Bahasa Melayu).

We asked hunters to compare their hunting practices in oil palm plantations and forest. We also asked hunters about perceived changes in their bearded pig hunting practices, the influence of their jobs on hunting, their hunting locations and bearded pig ecology. Respondents were also asked about their hunting motivations, animal protein consumption patterns in village and urban contexts, hunting techniques, hunting narratives and hunting success rates.

Most of the questions asked were open-ended, but we also asked closed questions to gather readily quantifiable information about certain categories. To avoid asking for sensitive information and making our respondents uncomfortable, we did not ask whether they had obtained the appropriate licences for hunting or sale of bearded pig meat. We did not compensate respondents for participating in the study.

To quantify meat and fish consumption patterns, we asked respondents how many times in the previous week they had eaten: bearded pig meat, deer meat, any other kind of wild meat, wild fish from rivers, wild fish from the sea, and domestic chicken, domestic pig or other domestic meat. We asked respondents to share their consumption patterns for both village (kampung) and city (bandar) settings, as many respondents had spent significant time living in each setting or regularly moved back and forth between each context. To quantify hunting success, we asked respondents how many hunting trips for bearded pig, on average, were successful out of four attempts.
| Respondent characteristics
Hunter ages ranged from 26 to 72 years, with a mean age of 47 years.
Most hunters had attended school until Form 1-5 (corresponding to 13-17 years of age), a few had received their Sijil Pelajaran Malaysia (Malaysia Certificate of Education, equivalent to a US high school degree) and a small minority of respondents had attended university or institute programmes. Respondents worked in a variety of fields, including the oil palm industry (smallholder and industrial), police and government service, the clergy, semi-professional hunting, forestry, farming, rideshare driving and various forms of self-employment.
Twenty-seven out of 36 respondents (75%) said they had worked in oil palm agriculture at some point, whether as smallholders or in industrial oil palm plantation roles.
| Data analysis
To investigate whether hunting practices have changed due to the expansion of oil palm plantations in Sandakan District, we compared hunting techniques used by hunters who started hunting earlier and later in the process of oil palm expansion in Sabah. We calculated the approximate year each hunter began hunting, based on their current age and the age they began hunting. We separated hunters into two categories: those who began hunting before 1985, and those who began in 1985 or later. We chose 1985 for this analysis, as extensive oil palm expansion in the Sandakan district occurred throughout the 1970s, resulting in an oil palm-dominated landscape by the late 1970s and 1980s (Dayang Norwana et al., 2011; Gaveau, Sheil, et al., 2016). To test for differences in hunting techniques between the two categories of hunters, we conducted a Fisher's exact test in R version 3.6.0 (R Core Team, 2019).
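The comparison described above can be illustrated with a small sketch. The original analysis was run in R; the Python version below uses hypothetical counts to show the same idea of testing whether use of a given technique differs between hunters who started before 1985 and those who started in 1985 or later (scipy's fisher_exact handles one 2 x 2 table at a time, so the sketch tests a single technique).

```python
from scipy.stats import fisher_exact

# Hypothetical 2 x 2 table for a single hunting technique (e.g. "on foot with a gun"):
# rows: started hunting before 1985 / in 1985 or later
# cols: used the technique / did not use it
table = [
    [12, 2],   # before 1985: used, did not use (illustrative counts only)
    [16, 7],   # 1985 or later: used, did not use
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```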
Qualitative data were analysed via inductive content analysis (Elo & Kyngäs, 2008), in which we started with specific observations of individual hunters and moved to a more general framework of contemporary KDM hunting practices among our respondent pool.
We present our findings as a sequence of themes that emerged from the interviews (e.g. Dhee et al., 2019). We focused our analysis on (a) endurance and transformation of KDM pig hunting and dietary practices; and (b) the specific influences of oil palm expansion and urbanization on the persistence and change in these practices. We present interview excerpts as English translations, with the original Bahasa Melayu quote sometimes included to present respondent insights in their own language and expression.
| Differing hunting practices in forest and oil palm plantations
In response to an open-ended question about whether hunting in the forest is different from hunting in oil palm, hunters reported several distinct characteristics of hunting in each environment (Table 1).
Most prevalent was the perception that hunting in oil palm plantations was easier overall than hunting in forests, for example, because it was generally less tiring than walking in a forest, easier to see or find pigs, or more predictable in terms of knowing exact foraging locations preferred by pigs. Hunting in forests was characterized by a number of hunters as being harder overall than hunting in plantations, and involved walking on foot (often for longer distances). For example, Respondent 14 contrasted the two styles of hunting this way: 'In the plantation you know the pig will come eventually; it's only a matter of time', whereas in the forest 'it's not as certain even if you hunt all day long, because you will need to walk and only if you cross paths with it will you get it; if you do, you do'.

Additionally, five respondents noted a difference between the taste of the meat from pigs in oil palm plantations as compared to forest. Three hunters specifically expressed a preference for the taste of pig meat from forest. Respondent 20 commented, 'The pig from the forest is much tastier, it's more fit. If the pig eats oil palm its fat isn't as sweet. It's very rare to meet a pig that's never eaten oil palm'.
| Perceived changes in pig ecology over time
In response to an open-ended question about whether they had noticed any changes in bearded pig behaviour since they had started hunting, more than half of all respondents (20/38) noted some type of pig behaviour change over time (Box 1). In particular, 17 hunters replied that they noticed that pig behaviour had become more skittish, wild or fearful over the years. Among hunters who had started hunting before 1985, 71% (10/14) noted this increased flight response, whereas only 26% (6/23) of hunters who started hunting after 1985 mentioned this behavioural change. Additionally, five hunters noted other pig behaviours (e.g. activity patterns) that they perceived to have changed over time. For example, one hunter hypothesized that pigs change their behaviour in response to the schedule of workers in the plantation, suggesting that the pigs came into the plantation after workers had gone home for the day. Some hunters reported seeing bearded pig eruptions of scores or hundreds of individuals, although these observations were typically made by older hunters. Several hunters in our study described these pig eruptions with awe, fear, excitement or shock. For example, Respondent 5 said: 'I was sitting in a tree when a huge herd of pigs came by. I was so shocked that I didn't even shoot any. I just sat there counting them'. Respondent 15 commented, 'There are so many pigs that all you can do is just stand and stare until they run away'. A few hunters acknowledged that large pig aggregations occurred, but had not seen large herds or did not know many details about them. Younger hunters typically had never seen or heard of these large groups or long-distance movements.

TABLE 1 Salient themes of hunting in forest and oil palm plantations mentioned by hunters in response to an open-ended question about the difference between hunting in the two habitat types

| Theme | No. hunters |
| Harder overall (e.g. more tiring, more variable) | 8 |
| Easier overall (e.g. less tiring, more predictable) | 9 |
| Hunting on foot | 6 |
| More waiting for pigs | 5 |
| Walking farther distances | 5 |
| Easier to find/see pigs | 4 |
| Easier to get more pigs | 2 |
| Predictable places pigs come to forage | 3 |
| Hunting with a car | 2 |
| Animal protein consumption patterns in village and urban settings
In village settings, 72% of respondents (n = 32) reported consuming bearded pig weekly or more frequently, 31% of respondents reported consuming bearded pig 2-3 times per week and 22% reported consuming bearded pig four or more times per week (Figure 2). More respondents in village contexts consumed bearded pig meat on a weekly basis than any other animal protein besides domestic chicken (Figure 2). In addition to bearded pig meat, a minority of respondents

More respondents in urban settings consumed marine fish, domestic chicken and domestic pork than bearded pig. In cities, only 4.3% of respondents reported consuming other wild meat on a weekly or more frequent basis.
| Hunting declines due to urbanization and other factors
Seven hunters said they hunted less than before due to job commitments, or dynamics related to job opportunities and urban life.
Factors tied to urbanization included job-related time commitments, lack of energy due to work and increased travel distance required to hunt.
For example, Respondent 6, who worked as a contractor in Sandakan, said, 'In the past you'd always go hunt, now there's not enough time to hunt'. Respondent 30 noted, 'When you live in the city there are no good places to hunt'. Respondent 2, a rideshare driver in Sandakan, hunted on days off work, but explained that he hunts 'Less now, there are many estates, the forest is remote and the pigs are far away'.
Hunters also reported hunting declines tied to other factors.
Three hunters specifically mentioned oil palm-associated land-use change, and related dynamics such as the resulting increase in travel time to hunting locations, as a reason for their reduced hunting frequency. Three hunters also referenced the increased difficulty in finding and/or purchasing ammunition as a reason for reduced hunting.

Some hunters were very clear about the importance of bearded pig meat as a central food source. For example, Respondent 15 said, 'It is the main source of food for people who live in the villages' ('Dia menjadi sumber makanan orang kampung'). For some hunters, it was important that hunting bearded pig is a way of life. Respondent 9 said that his father taught him that 'This is our life. We live in the forest; this is our food'. As Respondent 25 put it, 'We cannot leave

Selling bearded pig meat for money was cited as a secondary motivation for hunting among a minority of respondents (10 respondents, 27%), followed by respondents citing other motivations (6, 16%). For some hunters who sold bearded pig meat regularly or occasionally, the sale was an important source of income. Hunters generally reported current bearded pig meat prices to be relatively high at roughly 10-15 MYR/kg, in contrast to reported prices of around 3-5 MYR/kg roughly 10 years ago (much lower than current prices, even when adjusted for inflation). Monthly income from pig hunting was reported to be as high as 5,000 MYR (~1,194 USD) in a good month, substantially higher than wages earned in oil palm plantations. However, respondents expressed mixed perceptions of hunting bearded pig for sale. Some hunters said they never hunted for sale, and felt that selling bearded pig meat was irresponsible because it contributed to pig population declines. Others felt that selling bearded pig meat was unnecessary, or even reprehensible, due to the robust KDM cultural practice of gifting the meat. For example, Respondent 25 captured the sentiment of many KDM hunters towards selling bearded pig meat: 'Don't sell it, if people ask just share it'. ('Bukan jual lah, kalau orang minta bagi-bagi lah'.)
| Hunting technique persistence over time
We found no significant difference in hunting techniques between respondents who began hunting before 1985 and those who began in 1985 or later (Fisher's exact test, p > 0.99). Overall, the most popular hunting techniques that respondents had used were (a) on foot with a gun (28 respondents, 83% of respondents) and (b) drive hunts with a gun (25, 75%), although numerous other techniques were also widely used (Figure 4). Hunting with dogs and a spear and hunting with snares were also common among our respondents (Figure 4).

Respondents cited a variety of reasons why they preferred different hunting techniques. For some, hunting location was a major factor in the technique used. For example, hunting on foot with a gun was possible in all habitat types, whereas drive hunts were mentioned
| Regulatory factors influencing contemporary bearded pig hunting practices
Respondents were generally aware of hunting regulations, and knew that permits were required to legally hunt wildlife and sell wild meat. Several hunters shared stories about law enforcement, or referenced permit requirements when explaining their own reasoning about hunting decisions. However, despite their general awareness of the regulatory environment around hunting bearded pig and other species, there was some inconsistency and confusion in understanding permit requirements and hunting regulations. There was also a shared perception that Wildlife Department and Forestry Department officials, among others, were frequently monitoring forest areas for illegal hunting. For example, Respondent 6 said, 'Many of my friends have been fined by the Wildlife Department'.
| DISCUSSION
We found several lines of evidence indicating that important hunting practices have been reshaped by oil palm expansion and urbanization. Our results also show that KDM pig hunting motivations and socio-cultural practices continue to be robustly expressed in Sandakan District, Sabah, Malaysian Borneo.

Respondents indicated several distinct themes differentiating hunting practices in oil palm plantations and forest. Additionally, many hunters, particularly older hunters who started hunting before 1985, perceived changes in bearded pig behaviour over time.

Hunter dietary patterns also revealed important differences in meat consumption between village and city life. However, hunting motivations and techniques were consistent with past records of hunting practices within Indigenous Bornean communities. Together, these results point to the endurance and transformation of hunting practices within our respondent pool, and suggest a need for long-term hunting management that accommodates meat provision, KDM socio-cultural practices and bearded pig populations.
| Oil palm-associated changes in contemporary KDM-bearded pig hunting practices
The different characteristics reported between hunting in oil palm plantations and forests indicate an important shift in contemporary KDM hunting practices.With roughly a quarter of Sabah's land area now under plantation agriculture, mostly oil palm (Gaveau, Sheil, et al., 2016), and the majority of our study area under oil palm agriculture (Figure 1), shifting hunting practices in oil palm plantations carry important implications for people and pigs across Sabah.For KDM people, the qualities of the pig hunting experience have already changed substantially.Our respondents noted that hunting in oil palm typically involves more waiting for pigs to forage on oil palm fruits at predictable locations, and that they can more easily see and find pigs in the wider, open environment of an oil palm plantation.Respondents also mentioned that hunting in oil palm plantations is typically easier and less tiring, requiring less walking for extended distances as compared to hunting in forests, and sometimes involving hunting from a car.In Sabah, just two decades ago the vast majority of bearded pig hunting took place in forest contexts and typically on foot with a gun (Bennett et al., 2000), and for millennia across Borneo bearded pig hunting took place in a habitat defined primarily by tropical forests (e.g.Medway, 1964;Prentice et al., 2011).By contrast, many village settings in our study area are located adjacent to, or even within, agricultural landscapes, which are disproportionately associated with higher pathogen infection rates and zoonotic disease emergence (Rohr et al., 2019;Shah et al., 2019).The increase in contemporary bearded pig hunting within oil palm plantations therefore raises important concerns about potential public health risks to KDM pig hunters and communities.For example, in northern Sabah, deforestation and related environmental change have been associated with higher numbers of cases of Plasmodium knowlesi, which causes human malaria (Fornace et al., 2016).Future research should investigate whether increased bearded pig hunting within oil palm plantations is linked to increased contact with animal vectors carrying infectious diseases and to higher rates of infectious disease transmission to humans.
Pest control was a common hunting motivation among our respondents, highlighting another major influence of oil palm cultivation on pig hunting patterns.More than half of our respondents cited pest control as a motivation to hunt bearded pigs.Three quarters of our respondents worked in oil palm at some point in their lives, many of them as smallholders and some in industrial oil palm plantations.
In both settings, bearded pigs are often regarded as crop pests due to their rooting behaviour, similar to that of the Eurasian wild boar S. scrofa, which also damages young oil palm trees in plantations (Jambari et al., 2012;Luskin et al., 2014), with potentially important economic implications.Jambari et al. (2012) also reported pest control of wild boar as an important hunting motivation in oil palm plantations in Peninsular Malaysia.
In addition to the other influences of oil palm cultivation on pig hunting, five respondents noted the different taste of bearded pig meat from oil palm and forest, with three expressing a clear preference for pig meat from forest (e.g.noting the meat tasted sweeter, and/or less smelly, from forest as compared to oil palm plantations).Taken together, our findings suggest that oil palm expansion may be reshaping a variety of environmental, technical, economic and alimentary aspects of contemporary KDM socio-cultural practices linked to bearded pigs.
| Perceived changes in the behavioural ecology of bearded pigs
When asked if they had noticed a change in bearded pig behaviour over the last several decades, 17 hunters noted that pigs today are 'wilder' or 'smarter' (seemingly more skittish) as compared to the past. Respondent 16, for instance, claimed 'In the past they weren't wild, [but] now they are more wild to hunt' ('Dulu tidak liar, sekarang liar diburu', where wild means quick to flee or harder to catch). Similarly, Respondent 3 commented 'They are a bit wilder' ('Ada liar sikit') and said 'It means he [the pig] has an IQ' ('Bermakna dia ada IQ'). A number of hunters noted that pigs have become increasingly sensitive to hunter presence, including stimuli such as gunshots, gunpowder smell, human smell or headlamp lights. Hunters explained that the pigs respond to these stimuli by fleeing more readily than in the past (Box 1). Rapid fleeing behaviour in response to human hunting has also been recorded in other ungulates, including duikers (multiple species; Croes et al., 2007), reindeer Rangifer tarandus (Reimers et al., 2009) and red deer Cervus elaphus (Chassagneux et al., 2020).
Further research could investigate the causes and mechanisms of these changes in bearded pig behavioural ecology.High behavioural plasticity, which has been suggested as an adaptive response of red deer in Norway (Lone et al., 2015), could be a mechanism, as could evolutionary selection for individuals with elevated flight response.
Further research could also investigate whether habitat fragmentation and oil palm expansion provide a ripe context for these potential mechanisms for behavioural shifts.Our study area in Sabah has high hunting accessibility (Deith & Brodie, 2020), which could elevate the actual or perceived risk to wildlife in the area and create 'landscapes of fear' (Gaynor et al., 2019).Recent ecological evidence from Sabah suggests substantial rates of bearded pig crop raiding in oil palm plantations (Davison et al., 2019;Love et al., 2018), which was widely reported among our respondent pool.We therefore hypothesize that bearded pigs in many parts of Sabah are employing a 'high-risk, high-reward' strategy of feeding on cross-border oil palm fruit subsidies, providing access to high-fat food resources but also elevating predation risk due to human hunting in oil palm plantations (Meijaard et al., 2018), potentially causing elevated flight response in pigs in human-modified landscapes.
Finally, responses from hunters suggest that further research should investigate links between oil palm-associated fragmentation and bearded pig nomadic movements.In our study, several older hunters had seen or heard of movements of large herds of bearded pigs, a behaviour thought to indicate historical patterns of bearded pig nomadism (Caldecott et al., 1993).Younger hunters, however, had typically not observed this aggregating behaviour among bearded pigs.This pattern is consistent with speculation of declines of bearded pig nomadism in the literature due to habitat fragmentation (e.g.Luskin & Ke, 2018).Moreover, oil palm fruit subsidies to bearded pigs-shown to be strongly associated with wild boar feeding and reproduction (Luskin, Brashares, et al., 2017)-could reduce or eliminate the ecological basis for bearded pigs to make nomadic movements at all.As has been shown with logging (Granados et al., 2019), we hypothesize that oil palm-driven habitat fragmentation is causing a reduction in bearded pig responses to mast fruiting events.We also hypothesize that, across Borneo, there is a loss of traditional ecological knowledge of these migrations and hunting practices associated with them (Figure 5).Further research should investigate this hypothesis through social and ecological studies of habitat fragmentation, long-range pig movements, social memory and traditional ecological knowledge.
| Urbanization as a driver of changes in contemporary KDM pig hunting practices
Shifted dietary patterns and reduced hunting tied to urbanization reflected important elements of change in our study. In urban contexts, hunter responses suggested that bearded pig was a favoured delicacy but not an indispensable food source given the widespread availability of wild fish and domestic chicken and pork. While bearded pig was the fourth most commonly consumed animal protein for our respondents in urban contexts, in village contexts bearded pig was the second most consumed animal protein (Figure 2). As urbanization increases in Sabah (Cai, 2018), our study suggests that reduction of bearded pig consumption levels in urban contexts may be one way in which reliance on bearded pig meat is lessening in modern times. Additionally, the time commitments related to urban jobs and increased distance from hunting locations resulted in lower hunting for seven of our respondents. Over time, such urban dynamics may weaken not only reliance on bearded pig meat within the KDM community but also the hunting relationship that has connected people and pigs across Borneo for millennia (Medway, 1964).
| Enduring links between historical and contemporary KDM pig hunting practices
While KDM pig hunting practices appear to be changing in important ways, motivations and techniques to hunt bearded pigs spoke to enduring links between KDM communities and pigs.The hunting motivations we recorded among KDM hunters in Sandakan District are in step with the outcomes Bennett et al. (2000) recorded in Sabah and Sarawak, with meat provision as the primary motivation for bearded pig hunting.Presumably, meat provision was also the primary motivation for Indigenous bearded pig hunting across Borneo for millennia, based on archaeological dig sites showing bearded pig bones in sites used for food consumption (Medway, 1964).Additionally, Bennett et al. (2000) found that wild meat presence in rural villager diets was directly related to the abundance of bearded pigs in the forest, and unrelated to alternative sources of food and income.Thus, bearded pigs were generally hunted if they were locally available, whether or not local communities were directly reliant upon them.Some hunters in our study did not rely on bearded pig meat; however, we also encountered several hunters who regarded bearded pig meat as essential to their livelihoods and food security.For example, in describing his motivation to hunt, Respondent 10 said simply: 'It's a matter of survival.'('Pasal-untuk survive lah.')Finally, as there was no significant difference in hunting techniques used by older and younger hunters (i.e.hunters who began hunting before or after 1985), our results suggest that common bearded pig hunting approaches-a blend of modern and traditional techniques (Figure 4)-have likely persisted for at least the last two generations of hunters.
Our findings showed that the bearded pig continues to be a cultural keystone species for the KDM respondents in our study (Garibaldi & Turner, 2004).Respondents emphasized the importance of bearded pig meat at ceremonial and cultural events.
Weddings, church events, family gatherings, festivals, birthdays and other celebratory occasions were considered by many hunters to be incomplete without wild meat, typically bearded pig.As Respondent 10 noted: 'The bearded pig is our tradition.For celebrations you only use the bearded pig'.(Note: Other wild game meat is still used by some; for example, one hunter mentioned feral buffalo in connection with celebrations.However, bearded pig meat is indeed standard fare at many KDM cultural events.)Barbecued, sautéed or roasted bearded pig was widely considered a favourite delicacy among our respondent pool, and for many the sharing and consuming of this delicacy constituted a centrepiece of communal celebrations.The significance of bearded pig meat for cultural events is also evident in the high proportion of respondents (54%) who ranked 'gift-giving' as a secondary motivation to hunt.Sharing bearded pig meat, in everyday moments and at special events, has been part and parcel of many Indigenous societies in Borneo (Chin, 2001;Wadley et al., 1997); our results indicate that this species continues to be a cultural touchstone for KDM respondents in our study.
| Regulatory factors influencing contemporary bearded pig hunting practices
State-wide regulations and enforcement may be playing a role in reducing the frequency of KDM hunting of bearded pigs.As Respondent 12 shared, 'Now, you just buy pig [rather than hunt it yourself] because either you're busy or you're afraid of the law' ('Sekarang, beli babi jak-sibuk-takut undang-undang').Important conservation legislation was passed in the 1990s, requiring licences for hunting bearded pig and other game species (Wildlife Conservation Enactment 1997), and enforcement has increased in many areas of the state (e.g.Latip et al., 2015).Many respondents were aware of hunting regulations, as has been shown for hunters in northern Sabah as well (Wong et al., 2012).We hypothesize that the permitting system and/or enforcement of hunting laws could be influencing the frequency of hunting behaviour in Sabah.While our study was not designed to directly understand this relationship, future work addressing the link between wildlife law enforcement and KDM pig hunting would be a valuable contribution to understanding pathways for sustainable biocultural conservation in Sabah.Adding to dynamics between hunters and law enforcement agencies, in 2020 hunting licenses were frozen by the Sabah Wildlife Department due to the Movement Control Order put in place during COVID-19 (Chan, 2021;The Star, 2021).With the confirmed spread of African Swine Fever to multiple Sabah districts in early 2021, the Wildlife Department maintained the freeze on hunting licences and prohibited the selling of sinalau bakas, a popular smoked or barbecued form of wild bearded pig meat (The Borneo Post, 2021).For biocultural conservation of the KDM-bearded pig socio-ecological system, we recommend that local and state government officials and conservation managers consider fair, location-specific management approaches.For example, these approaches might consider rural and urban consumption of bearded pig meat in different ways, or regulate subsistence wild meat consumption by a separate standard from commercial sale.Moreover, these approaches should include local KDM and other Indigenous peoples (Bridgewater & Rotherham, 2019), to elevate their voices, preserve culturally important practices and ensure food security for communities that rely on wild meat and fish.
| CON CLUS ION
Our results speak to both the endurance and reshaping of historical hunting practices among contemporary KDM communities in Sabah, Malaysia.Several important hunting motivations and techniques were maintained among our respondents, including meat provision as the primary motivation to hunt and hunting with guns as the primary technique used for bearded pigs over at least the last two generations.However, our findings also indicate that KDM hunting practices have changed substantially, with oil palm plantations: (a) likely providing a more common pig hunting environment than recorded in the past in Sabah and (b) serving as a context for reshaped hunting practices by KDM hunters in our study as compared to their hunting practices in forest.Additionally, urbanization has led to lowered levels of bearded pig meat consumption and less time for some KDM people in our study to hunt bearded pigs.Our results show both the persistence and malleability of Indigenous KDM pig hunting practices.Amidst ongoing oil palm expansion, urbanization-related dynamics and broader political-economic changes, environmental governance initiatives should support robust cultural traditions while ensuring sustainable wildlife populations.Through inclusive, collaborative planning and locationspecific regulation, bearded pig management plans can ensure fair access to the meat provision, socio-cultural benefits and pest control supplied by sustainable bearded pig hunting while also ensuring long-term conservation of bearded pig populations, ecological functions and habitat.
FIGURE 3 Common motivations of respondents (n = 37 overall, n = 34 primary) to hunt bearded pig. 'Overall' motivations indicate that a motivation was affirmed by a given hunter (regardless of rank order), whereas 'primary' motivations indicate that the motivation was listed as the number one motivation to hunt for that respondent.

FIGURE 4 Proportion of KDM hunters within respondent pool (n = 34) who had used a variety of traditional (T) and modern (M) techniques for hunting bearded pig.
BOX 1 Selected English translations (from Bahasa Melayu [Malay]) of quotations from respondents who perceived changes in bearded pig behaviour over time Qualitative evidence of changes in pig behaviour
'Yes there's a change. The pigs today have already become wild. Pigs today are afraid of men. In the past they wouldn't run from men. It was much easier to hunt pigs in the past'. - Respondent 20

'In the past pigs only looked, but now they run away. Now the pig has got a high school certificate'. - Respondent 4
Observation of non-Hermitian topology with non-unitary dynamics of solid-state spins
Non-Hermitian topological phases exhibit a number of exotic features that have no Hermitian counterparts, including the skin effect and breakdown of the conventional bulk-boundary correspondence. Here, we implement the non-Hermitian Su-Schrieffer-Heeger (SSH) Hamiltonian, which is a prototypical model for studying non-Hermitian topological phases, with a solid-state quantum simulator consisting of an electron spin and a $^{13}$C nuclear spin in a nitrogen-vacancy (NV) center in a diamond. By employing a dilation method, we realize the desired non-unitary dynamics for the electron spin and map out its spin texture in the momentum space, from which the corresponding topological invariant can be obtained directly. Our result paves the way for further exploiting and understanding the intriguing properties of non-Hermitian topological phases with solid-state spins or other quantum simulation platforms.
While Hermiticity lies at the heart of quantum mechanics, non-Hermitian Hamiltonians have widespread applications as well [1-3]. Indeed, they have been extensively studied in photonic systems with loss and gain [4-8], open quantum systems [9-14], and quasiparticles with finite lifetimes [15-17], etc. More recently, the interplay between non-Hermiticity and topology has attracted tremendous attention [18,19], giving rise to an emergent research frontier of non-Hermitian topological phases of matter. In contrast to topological phases for Hermitian systems [20-22], non-Hermitian ones bear several peculiar features, such as the skin effect [23-25], breakdown of the conventional bulk-boundary correspondence [24-29], and new topological classifications [30-32]. Experimental observations of the non-Hermitian skin effect have been reported in mechanical metamaterials [33], non-reciprocal topolectric circuits [34], and photonic systems [35-37]. However, despite the notable progress, direct observation of the topological invariant for non-Hermitian systems has not been reported in a quantum solid-state system hitherto, owing to the stringent requirement of delicate engineering of the coupling between the target system and the environment in implementing non-Hermitian Hamiltonians. In this paper, we carry out such an experiment and report the direct observation of a non-Hermitian topological invariant with a solid-state quantum simulator consisting of both electron and nuclear spins in an NV center (see Fig. 1).
NV centers in diamonds [38] exhibit atom-like properties, such as long-lived spin quantum states and well-defined optical transitions, which make them an excellent experimental platform for quantum information processing [39-44], sensing [45-47], and quantum simulation [48-50]. For Hermitian topological phases, simulations of three-dimensional (3D) Hopf insulators [48] and chiral topological insulators [49] with NV centers have been demonstrated in recent experiments, and observations of their topological properties, such as nontrivial topological links associated with the Hopf fibration and the integer-valued topological invariants, have been reported. A key idea that enables these simulations is to use the adiabatic passage technique, where we treat the momentum-space Hamiltonian as a time-dependent one with the momentum playing the role of time. The ground state of the Hamiltonian at different momentum points can be obtained by adiabatically tuning the frequency and the amplitude of a microwave that manipulates the electron spin in the center, and quantum tomography of the final state with a varying momentum provides all the information needed for obtaining the characteristic topological properties [48]. Nevertheless, simulating non-Hermitian topological phases with the NV center platform (see Fig. 1(a, b)) faces two apparent challenges. First, for non-Hermitian systems the governing Hamiltonians typically have complex eigenenergies and the conventional adiabatic theorem is not necessarily valid in general [51]. As a consequence, the adiabatic passage technique does not apply and the preparation of eigenstates of non-Hermitian Hamiltonians becomes trickier. Second, the non-Hermiticity requires a delicate engineering of the coupling between the targeted system and the environment, so that tracing out the environment could leave the system effectively governed by a given non-Hermitian Hamiltonian. These two challenges make simulating non-Hermitian topological phases notably more difficult than simulating their Hermitian counterparts. In this paper, we overcome these two challenges and report the first experimental demonstration of simulating non-Hermitian topological phases with the NV center platform. In particular, we implement a prototypical model for studying non-Hermitian topological phases, i.e., the non-Hermitian SSH model, by carefully engineering the coupling between the electron and nuclear spins through a dilation method that was recently introduced for studying parity-time symmetry breaking with NV centers [52]. Without using adiabatic passage, we find that the non-unitary dynamics generated by the non-Hermitian Hamiltonian will autonomously drive the electron spin into the eigenstate of the Hamiltonian with the largest imaginary eigenvalue, independent of its initial state. The topological nature of the Hamiltonian can be visualized by mapping out the spin texture in the momentum space, and the topological invariant can be derived directly by a discretized integration over the momentum space.
We consider the following non-Hermitian SSH model Hamiltonian in the momentum space [25,26]:

H(k) = γ [h_x σ_x + (h_z + i/2) σ_z],

where γ measures the energy scale (we set ħ = 1 for simplicity), h_x = v + r cos k, h_z = r sin k, and σ_{x,z} are the usual Pauli matrices. This Hamiltonian possesses a chiral symmetry σ_y^{-1} H(k) σ_y = −H(k), which ensures that its eigenvalues appear in (E, −E) pairs. Its energy gap closes at the exceptional points (h_x, h_z) = (±1/2, 0), which gives v = r ± 1/2 for k = π and v = −r ± 1/2 for k = 0. The topological properties of the Hamiltonian can be characterized by the winding number w of H(k) around the exceptional points as k sweeps through the first Brillouin zone [26]: w = 0, 1/2, and 1, respectively, if H(k) encircles zero, one, or two exceptional points. A sketch of the phase diagram of the non-Hermitian SSH model is shown in Fig. 1(c).
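As a numerical illustration of this winding-number counting, the sketch below traces the curve (h_x(k), h_z(k)) over the Brillouin zone and counts how many of the two exceptional points it encircles. The explicit matrix form of H(k) mentioned in the comments follows the reconstruction given above, and the parameter values are illustrative rather than the experimental ones.

```python
import numpy as np

def winding_number(v, r, n_k=2001):
    """Half the number of exceptional points (+-1/2, 0) encircled by the closed
    curve (h_x(k), h_z(k)) = (v + r cos k, r sin k) over the Brillouin zone."""
    k = np.linspace(0.0, 2.0 * np.pi, n_k)
    hx, hz = v + r * np.cos(k), r * np.sin(k)
    w = 0.0
    for x_ep in (+0.5, -0.5):                           # the two exceptional points
        phase = np.unwrap(np.angle((hx - x_ep) + 1j * hz))
        w += abs(phase[-1] - phase[0]) / (2.0 * np.pi)  # 1 if encircled, else 0
    return w / 2.0

for v in (1.6, 0.8, 0.1):                               # expect w = 0, 1/2, 1 for r = 1
    print(f"v = {v}: w = {winding_number(v, r=1.0):.2f}")
```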
To implement the non-Hermitian Hamiltonian H(k) with the NV center platform, we exploit a dilation method introduced in Ref. [52]. We use the electron spin as the targeted system and a nearby 13C nuclear spin as the ancillary qubit. Suppose that the dynamics of the electron spin is described by H_e and the dilated system is described by H_e,n; then the problem essentially reduces to the task that, for a given momentum k, we need to carefully engineer H_e,n such that H_e equals H(k) after projecting the nuclear spin onto a desired state. The basic idea is as follows. We consider a quantum state |ψ⟩ evolving under a non-Hermitian Hamiltonian H_e, which satisfies the Schrödinger equation i ∂/∂t |ψ(t)⟩ = H_e|ψ(t)⟩. Then we introduce a dilated state |Ψ(t)⟩ = |ψ(t)⟩|−⟩ + η(t)|ψ(t)⟩|+⟩ governed by the dilated Hermitian Hamiltonian H_e,n. Here, η(t) is a time-dependent operator fixed by the dilation procedure (see Supplementary Information). For our purpose, the dilated Hamiltonian should be designed properly as (see Supplementary Information)

H_e,n(t) = Σ_{i=0}^{3} [A_i(t) σ_i ⊗ I + B_i(t) σ_i ⊗ σ_z],

where I is the two-by-two identity matrix, σ_0 = I, and A_i(t) and B_i(t) (i = 0, 1, 2, 3) are time-dependent real-valued functions determined by H_e. After the time evolution process, we can project the nuclear spin onto its |−⟩⟨−| subspace to obtain |ψ(t)⟩.

[Figure caption: Through optical pumping, we first polarize the electron and nuclear spins onto |0⟩_e and |↑⟩_n, respectively. Then, rotations along the x- and y-axes prepare the dilated system in the state |Ψ(0)⟩ = |−1⟩_e|−⟩_n + η(0)|−1⟩_e|+⟩_n. The evolution box implements the unitary dynamics generated by the dilated Hamiltonian H_e,n, after which we measure the nuclear spin in the |±⟩ basis. A postselection of the nuclear spin in the |−⟩ state collapses the electron spin into the desired eigenstate of H_e = H(k) for a given momentum k.]
Unlike the case of simulating Hermitian topological phases [48,49], where the ground state of the Hamiltonian at different momentum points can be obtained through adiabatic passages, for non-Hermitian Hamiltonians the eigenvalues are complex numbers in general and the adiabatic passage method does not apply. Fortunately, we can exploit the non-unitary dynamics generated by the non-Hermitian Hamiltonian to prepare the eigenstate that corresponds to the eigenvalue with the largest imaginary part. To be more specific, suppose the electron spin is initially in an arbitrary state |ψ(0)⟩ = α_1|R_1⟩ + α_2|R_2⟩, where |R_{1,2}⟩ are the right eigenstates of H_e corresponding to the eigenvalues λ_{1,2}. Without loss of generality, we assume Im(λ_1) > Im(λ_2). Then the electron spin state will decay to |R_1⟩ in the long-time limit. As a result, we can prepare the eigenstate |R_1⟩ of H_e by simply waiting long enough for the system to decay to this state.
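The following sketch illustrates this preparation-by-decay numerically: an arbitrary initial state is evolved with the (assumed) non-Hermitian H(k), renormalized after each step, and its overlap with the right eigenstate of largest imaginary eigenvalue approaches unity. All parameter values here are illustrative.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

gamma, v, r, k = 3.5, 0.3, 1.0, 0.6 * np.pi
hx, hz = v + r * np.cos(k), r * np.sin(k)
H = gamma * (hx * sx + (hz + 0.5j) * sz)       # assumed form of H(k), see text

evals, evecs = np.linalg.eig(H)
R1 = evecs[:, np.argmax(evals.imag)]           # right eigenstate with largest Im(E)
R1 /= np.linalg.norm(R1)

psi = np.array([1.0, 0.0], dtype=complex)      # arbitrary initial state
for _ in range(400):
    psi = expm(-1j * H * 0.01) @ psi           # small non-unitary evolution step
    psi /= np.linalg.norm(psi)                 # renormalize after each step

print(f"|<R1|psi(t)>|^2 = {abs(np.vdot(R1, psi))**2:.4f}")   # approaches 1
```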
To experimentally realize H e,n , we apply two microwave pulses with time-dependent amplitude, frequency and phase. We explore the state evolution by monitoring the population on |0 e state and see how it decays to the desired eigenstate of H e . In Fig. 2 (a), we show the quantum circuits used in our experiment. Fig. 2 (b) and (c) show our experimental results of P z,x 0 as a function of time. From these figures, it is evident that our experimental results match the theoretical predictions excellently, within the error bars for almost all of the data points. In addition, after long enough evolution time (about 1.5 µs in our experiment), the electron spin state decays to the desired eigenstate of H(k) for different momentum k. This indicates that our dilated Hamiltonian indeed effectively implement the non-Hermitian H(k) in our experiment.
We mention that in Fig. 2(c), we measure P_0 in the x-basis, which requires a π/2 rotation of the electron spin. In order to avoid off-resonance driving, the microwave driving power should be weak enough. The Rabi frequency should be much smaller than the hyperfine coupling strength (13.7 MHz), so that the π/2 rotation would take a time on the order of a microsecond. Meanwhile, the time evolution process also takes approximately half of the system's coherence time (T_2 = 3.3 µs). Thus, adding an additional rotation would not only increase the processing time but also introduce both gate and decoherence errors. To avoid this, we first apply a unitary transform to the target Hamiltonian, obtaining a rotated Hamiltonian H'_e (see the Supplementary Information). We evolve the electron spin with H'_e instead of H_e and measure the final state in the z-basis. This is equivalent to evolving the electron spin with H_e and then measuring in the x-basis, but with improved efficiency and accuracy (see the Supplementary Information). In addition, the time needed for the initial state to decay to the desired eigenstate of H_e depends crucially on the difference between the imaginary parts of its two eigenvalues. For some parameter regions, this difference may not be large enough and the decay time could even be longer than T_2. In order to speed up the process, we increase γ according to the specific parameters so that we can finish the experiment within the coherence time.
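A quick numerical check of the rotation trick described above is sketched below: evolving with a rotated Hamiltonian and measuring σ_z gives the same expectation value as evolving with the original Hamiltonian and measuring σ_x. The specific unitary used here (a Hadamard-like rotation exchanging σ_x and σ_z) is an illustrative choice; the experimental pulse convention may differ by phases.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
U = (sx + sz) / np.sqrt(2.0)                   # Hadamard-like: U sx U = sz, U = U^dag

H = 3.5 * (0.4 * sx + (0.9 + 0.5j) * sz)       # an illustrative non-Hermitian H_e
psi0 = np.array([1.0, 0.0], dtype=complex)
t = 0.7

def expect(op, state):
    state = state / np.linalg.norm(state)      # renormalize the non-unitary state
    return np.real(np.vdot(state, op @ state))

psi = expm(-1j * H * t) @ psi0                 # evolve with H_e, measure sigma_x
phi = expm(-1j * (U @ H @ U) * t) @ (U @ psi0) # evolve with U H_e U, measure sigma_z
print(expect(sx, psi), expect(sz, phi))        # the two printed numbers coincide
```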
To probe the topological properties of the non-Hermitian SSH model, we can measure ⟨σ_z⟩ and ⟨σ_x⟩ for the final state of the time evolution [which is basically the eigenstate |R_1⟩ of H(k)] as k sweeps the first Brillouin zone. We plot our experimental results in Fig. 3. When H(k) encircles no or two exceptional points, the eigenvectors of H(k) are 2π-periodic in k, and the trajectory of ⟨σ_z⟩ and ⟨σ_x⟩ as k sweeps through the Brillouin zone forms closed circles. In this case, the winding number is zero or one, depending on whether the trajectory of ⟨σ_z⟩ and ⟨σ_x⟩ winds around the origin or not, as clearly shown in Fig. 3(a) and (c). In contrast, when H(k) encircles only one exceptional point, the eigenvector will have a 4π periodicity and k must sweep through 4π to close the trajectory, giving rise to a fractional value of the winding number w = 1/2 if k only sweeps through the first Brillouin zone [26]. This is also explicitly observed in our experiment as shown in Fig. 3(b). We mention that in Fig. 3(c), when k sweeps across π, the imaginary parts of the eigenvalues of H(k) exchange their sign, leading to a leap from one eigenstate to another. Mathematically, we can prove that ⟨R_1|σ_{z,x}|R_1⟩ = −⟨R_2|σ_{z,x}|R_2⟩ and ⟨R_1|σ_y|R_1⟩ = ⟨R_2|σ_y|R_2⟩ (see Supplementary Information). As a result, in the experiment we obtain ⟨R_1|σ_{z,x}|R_1⟩ by actually measuring ⟨R_2|σ_{z,x}|R_2⟩ after the crossing of the eigenstates. In addition, with our experimentally measured data the winding number can also be calculated directly through a discretized integration over the momentum space (see Supplementary Information). In Table I, we show the winding number calculated from the experimental and theoretically simulated data for different parameter values of r. From this table, it is clear that the winding number calculated from the experimental data matches its theoretical predictions with good precision, and is in agreement with that obtained from the trajectory of ⟨σ_z⟩ and ⟨σ_x⟩ as well.
In summary, we have experimentally observed the non-Hermitian topological properties of the SSH model through non-unitary dynamics with a solid state quantum simulator. Our method carries over straightforwardly to other types of non-Hermitian topological models that are predicted to exist in the extended periodic table but have not yet been observed in any experiment. It thus paves the way for future explo-rations of exotic non-Hermitian topological phases with the NV center or other quantum simulation platforms.
Our sample is mounted on a laser confocal system. A 532 nm green laser is used for off-resonance excitation. The laser pulses are modulated by an acousto-optic modulator (AOM). We use an oil objective lens to focus the laser beam onto the diamond sample. In addition, this lens is also used to collect the fluorescence photons. The fluorescence photons are detected by a single photon detector module (SPDM) and counted by a homemade field-programmable gate array (FPGA) board. The 480 Gauss magnetic field is provided by a permanent magnet along the NV axis.
An arbitrary waveform generator (AWG, Techtronix 5014C) is used to generate low frequency analog signals and transistor-transistor logic (TTL) signals. Two of the analog signals are used to modulate the carrier microwave (MW) signal (generated by a MW source, Keysight N5181B) through an IQ-mixer so as to control its phase, amplitude and frequency conveniently. Another analog signal of AWG is used for generating the radio frequency (RF) signal. The TTL signals are used for modulating the laser pulse and providing gate signals for the FPGA board. The MW signal is applied onto the sample through a homemade MW coplanar waveguide. The RF signal is applied through a homemade coil. Before applied onto the sample, both MW and RF signals are amplified via amplifiers (Mini Circuits ZHL-30W-252-S+ for MW and Mini Circuits LZY-22+ for RF).
THE SAMPLE
Our experiment is performed on an electronic grade diamond produced by Element Six with a natural abundance of 1.1% for 13C. The crystal orientation is along the [100] direction. The electron spin in the NV center is used as the target qubit and a nearby 13C nuclear spin is used as the ancilla qubit. We use a Ramsey experiment to obtain the T_2* of the electron spin (see Fig. 4). For the sample used in our experiment, T_2* is measured to be 3.3 µs. Since the time evolution process employed in our experiment is no longer than 1.8 µs, the coherence time is long enough for our purpose.
Spin initialization
In this section, we briefly introduce the spin initialization process in our experiment. A 532nm green laser can be used to off-resonantly excite the NV center. Because of the intersystem crossing (ISC) [53], this process can be used to initialize the electron spin state onto the |m s = 0 state. The initialization fidelity is estimated to be 89% [54]. Meanwhile, when we apply a magnetic field around 500 Gauss (which is 480 Gauss in our experiment), there exists a flip-flop process between electron and nuclear spins due to the excited-state level anticrossing (ESLAC) [55], when the electron spin is in the excited state. Thus, the optical pumping process will polarize not only the electron spin, but also the nuclear spin [56].
The system Hamiltonian
The NV center has a spin-triplet ground state. Together with a strongly coupled 13C nuclear spin, it forms a highly controllable two-qubit system. By applying an external magnetic field along the quantization axis, the Hamiltonian of the NV center can be written as

H = D S_z^2 + ω_e S_z + ω_n I_z + A_zz S_z I_z,

where we use the secular approximation to drop all terms that do not commute with S_z. Here, D = 2π × 2.87 GHz is the zero-field splitting of the electron spin; ω_e = γ_e B (ω_n = γ_n B) is the Zeeman splitting of the electron (nuclear) spin; A_zz = 2π × 13.7 MHz is the hyperfine coupling strength. In our experiment, we use only the two levels |0⟩ and |−1⟩ of the electron spin, as shown in Fig. 1b in the main text. The Hilbert space dimension of the dilated system is four, with basis vectors denoted as |0⟩_e|↑⟩_n, |0⟩_e|↓⟩_n, |−1⟩_e|↑⟩_n and |−1⟩_e|↓⟩_n. In this subspace, the Hamiltonian in Eq. (2) reduces to a four-by-four matrix [Eq. (3)].
Spin state readout
We can readout the spin state by the spin-dependent photoluminescence (PL) rate [57]. The existence of ISC will lead to a decrease of fluorescence rate when electron is in its | − 1 state. Meanwhile, due to the ESLAC, different nuclear spin states will also cause a difference on the fluorescence rate. For convenience, we label the states |0 e , ↑ n , |0 e , ↓ n , | − 1 e , ↑ n , and | − 1 e , ↓ n with numbers from 1 to 4, and denotes their corresponding populations as P i with i = 1, 2, 3, 4. By optical pumping, we initialize the system onto |0 e , ↑ n state. By flipping the population onto different states, we can get the PL rates of each different state (N i ,i = 1, 2, 3, 4).
After the time evolution process, the state of our system is |Ψ(t)⟩ = |ψ(t)⟩_e|−⟩_n + η(t)|ψ(t)⟩_e|+⟩_n. We apply a nuclear-spin π/2 rotation to rotate the state into |Ψ(t)⟩ = |ψ(t)⟩_e|↑⟩_n + η(t)|ψ(t)⟩_e|↓⟩_n. What we want here is the expectation value of σ_z for the state |ψ(t)⟩. This can be obtained from the renormalized population of the state |0⟩_e|↑⟩_n within the |↑⟩_n subspace (i.e., P_1/(P_1 + P_3)). Hence, we need to know the populations of all four energy levels. The PL rate of the final state can be described as N_f = Σ_i N_i P_i. After the time evolution process, by flipping the populations between different states and measuring the corresponding PL rates [58], we can solve a set of linear equations of this form to obtain the P_i. Here, π_ij represents the π-pulse between state i and state j; N_f^{π13 π34} means that, before measuring the PL rate, we apply π_34 and π_13 sequentially to flip the populations. We use the maximum likelihood estimation [59] method to reconstruct the final populations P_i (i = 1, 2, 3, 4) under the normalization constraint P_1 + P_2 + P_3 + P_4 = 1.
Construct the dilated Hamiltonian
In this section, we give more details on how to implement the non-Hermitian SSH model, which is crucial for studying its topological properties. Suppose we have a state |ψ⟩ that evolves under a non-Hermitian Hamiltonian H_e according to the Schrödinger equation i ∂/∂t |ψ(t)⟩ = H_e|ψ(t)⟩ (we set ħ = 1 here). Now we want to find a dilated Hamiltonian H_e,n and a dilated state |Ψ(t)⟩ = |ψ(t)⟩|−⟩ + η(t)|ψ(t)⟩|+⟩ whose unitary evolution under H_e,n reproduces the non-unitary evolution of |ψ(t)⟩ under H_e. We take the dilated Hamiltonian of the form given in Ref. [52], which is constructed from H_e together with the time-dependent operator M(t) = η(t)†η(t) + I; M(t) obeys an auxiliary evolution equation, and M(0) should be chosen properly to make sure that M(t) − I remains positive throughout the experiment.
Realize the dilated Hamiltonian in NV center system
In this subsection, we introduce how to realize the Hamiltonian H_e,n in our NV-center system. To begin with, we expand the Hamiltonian in terms of Pauli operators,

H_e,n(t) = Σ_{i=0}^{3} [A_i(t) σ_i ⊗ I + B_i(t) σ_i ⊗ σ_z],

where A_i(t) and B_i(t) (i = 0, 1, 2, 3) are time-dependent real parameters. In Fig. 5, we show the time dependence of A_i(t) and B_i(t), with parameters set as η(0) = 8, γ = 3.5, v = 0.3, and r = 1. The Hamiltonian of our NV system is described in Eq. 3. Now we apply two individual MW pulses to selectively drive the two different electron spin transitions shown in Fig. 1b of the main text. Moving to a suitable rotating frame and dropping the fast-oscillating terms (the rotating-wave approximation) yields an effective Hamiltonian for the driven system; comparing this effective Hamiltonian (Eq. 17) with the target dilated Hamiltonian (Eq. 13), we obtain the experimental control parameters. In Fig. 6, we plot the time dependence of the MW detuning (δ_i), amplitude (Ω_i), and phase (φ_i).
ROTATION OF THE HAMILTONIAN
As mentioned in the main text, instead of measuring σ_x directly, we rotate the whole system with a unitary U that exchanges the roles of σ_x and σ_z. The rotated state |ψ'⟩ = U†|ψ⟩ is then an eigenvector of the rotated Hamiltonian H'_e = U†H_e U with the same eigenvalue λ, so that measuring σ_z on |ψ'⟩ is equivalent to measuring σ_x on |ψ⟩.
RELATIONS BETWEEN EXPECTATION VALUES FOR DIFFERENT EIGENSTATES
For a Hermitian Hamiltonian, the eigenstates with different eigenvalues are orthogonal to each other. However, this is not necessarily true for non-Hermitian Hamiltonians. In general, the non-Hermitian SSH Hamiltonian considered in this work has two right eigenvectors (|R_1⟩ and |R_2⟩) and two left eigenvectors (|L_1⟩ and |L_2⟩). Both the right and the left eigenstates can be parameterized by a complex angle θ satisfying tan θ = −(v + r cos k)/(r sin k + i/2). From these eigenstates one obtains the expectation values of σ_{x,y,z}, which satisfy ⟨R_1|σ_{z,x}|R_1⟩ = −⟨R_2|σ_{z,x}|R_2⟩ and ⟨R_1|σ_y|R_1⟩ = ⟨R_2|σ_y|R_2⟩. In the experiment, the initial state of the electron spin will decay to one of these eigenstates, and state crossing may occur at certain momentum points. As a result, we may need the above relations to draw the whole trajectory of the eigenstates on the Bloch sphere. Moreover, later in this Supplementary Information, we need the left eigenstates to calculate the topological index. We can use these relations to reconstruct the states from our experimental data.
DIRECT CALCULATION OF THE TOPOLOGICAL INDEX WITH EXPERIMENTAL DATA
In the experiment, we have chosen three groups of Hamiltonian parameters corresponding to three different topological regions, with winding number w = 0, 1/2, 1, respectively. For each case, we discretize the first Brillouin zone and choose a list of momentum points k from 0 to 2π. We employ the dilation method to prepare the electron spin in an eigenstate of the non-Hermitian Hamiltonian H(k) and measure σ_x and σ_z for varying k. The k points used in the experiment and the corresponding results are shown in Tables II, III and IV. Now, we show how to calculate the winding number using our experimental data. For a non-Hermitian Hamiltonian, the winding number is defined in terms of the left and right eigenvectors |L_k^n⟩ and |R_k^n⟩ mentioned before, where n labels the band index [60]. Ref. [61] proposes a more efficient way (Eq. 31) to calculate the complex winding number for the discretized Brillouin zone. This allows us to calculate the winding number from our discretized experimental data.
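One simple way to turn a discretized spin texture into a winding estimate is sketched below: follow one eigenvector branch continuously across the Brillouin zone and accumulate the angle swept by the vector (⟨σ_z⟩, ⟨σ_x⟩) around the origin. The "data" here are generated from the assumed form of H(k) rather than taken from Tables II-IV, and this geometric estimate is not the specific discretized formula (Eq. 31) of Ref. [61].

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def branch_texture(v, r, gamma=1.0, n_k=2000):
    """Follow one eigenvector branch continuously across the Brillouin zone and
    record (<sigma_z>, <sigma_x>) along it."""
    prev, traj = None, []
    for k in np.linspace(0.0, 2.0 * np.pi, n_k):
        H = gamma * ((v + r * np.cos(k)) * sx + (r * np.sin(k) + 0.5j) * sz)
        _, vecs = np.linalg.eig(H)
        cols = [vecs[:, i] / np.linalg.norm(vecs[:, i]) for i in range(2)]
        # pick the eigenvector with maximal overlap with the previous k point
        state = cols[0] if prev is None else max(cols, key=lambda c: abs(np.vdot(prev, c)))
        prev = state
        traj.append((np.real(np.vdot(state, sz @ state)),
                     np.real(np.vdot(state, sx @ state))))
    return np.array(traj)

def winding_estimate(traj):
    ang = np.unwrap(np.arctan2(traj[:, 1], traj[:, 0]))
    return (ang[-1] - ang[0]) / (2.0 * np.pi)

for v in (1.6, 0.8, 0.1):     # encircling zero, one, two exceptional points (r = 1)
    w = abs(winding_estimate(branch_texture(v, 1.0)))
    print(f"v = {v}: |w| ~ {w:.2f}")
```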
For the r = 0.3 case, the Hamiltonian encircles only one exceptional point and its eigenstate is 4π-periodic in the momentum k. Thus, k must sweep through 4π to close the trajectory. In our experiment, we do not sweep k over the region from 2π to 4π, but we can use the relations in Eqs. (25)-(29) to deduce ⟨σ_{x,y,z}⟩ for this region. In addition, all the left eigenstates can be obtained from those relations as well. In Table I in the main text, we show the winding numbers calculated directly from our experimental data. From this table, it is clear that the calculated winding number matches its theoretical prediction excellently. For the r = 0.3 case, we find that the winding number is 1 if we integrate k from 0 to 4π, and conclude that w = 1/2 if k only sweeps from 0 to 2π, which is consistent with Refs. [26,62].
Frictional active Brownian particles
Pin Nie ,1,2 Joyjit Chattoraj,1 Antonio Piscitelli,1,3 Patrick Doyle,2,4 Ran Ni,5,* and Massimo Pica Ciamarra 1,3,† 1School of Physical and Mathematical Science, Nanyang Technological University, Singapore 637371, Singapore 2Singapore-MIT Alliance for Research and Technology, Singapore 138602, Singapore 3CNR–SPIN, Dipartimento di Scienze Fisiche, Università di Napoli Federico II, I-80126 Naples, Italy 4Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA 5School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore 637459, Singapore
I. INTRODUCTION
The interaction force between macroscopic objects in direct physical contact has a frictional component. In colloidal hard-sphere suspensions, direct interparticle contacts are generally suppressed by frictionless repulsive forces, of electrostatic or polymeric origin, which are needed to stabilize the suspension, as well as by lubrication forces [1]. Hence, in these systems, frictional forces are generally negligible. Recent results [2-5] have, however, shown that in colloidal systems under shear, frictional forces might become relevant. This occurs as the relative velocity between contacting particles is of order σγ̇, with σ the particle diameter and γ̇ the shear rate. At large enough γ̇, colliding particles become able to overcome their lubrication interaction, entering into direct physical contact. The resulting frictional forces are believed to trigger the discontinuous shear thickening phenomenology [2-5], an abrupt increase of the shear viscosity with the shear rate.
In systems of self-propelled colloidal particles the relative velocity between colliding particles could also be high. Therefore, frictional forces could play a role in these systems by affecting their distinguishing feature, which is a motility-induced phase separation (MIPS) from a homogeneous state to one in which a high-density liquidlike state coexists with a low-density gaslike state. While the physical origin and the features of this transition have been extensively investigated in the last few years, in both numerical model systems [6-8] and experimental realizations [9-12], the role of frictional forces causing colliding particles to exert torques on each other has been ignored.
In this paper, we investigate the effect of frictional forces on the motility-induced phase separation of active spherical Brownian particles (ABPs), a prototypical active matter system. In this model, hard-sphere-like particles of diameter σ are equipped with a polarity n̂ along which they self-propel with active velocity v_a. The self-propelling directions change as n̂ undergoes rotational Brownian motion, with rotational diffusion coefficient D_r. Thermal noise also acts on the positional degree of freedom. The motility-induced phase separation of this model is controlled by two variables, the volume fraction, φ, and the Péclet number, Pe ≡ v_a/(D_r σ). Here we show that friction qualitatively affects the dynamical properties of ABPs, in the homogeneous phase, by enhancing the rotational diffusion while suppressing the translational one. Because of this, friction qualitatively changes the spinodal line marking the limit of stability of the homogeneous phase in the φ-Pe plane, at high Pe. While in the absence of friction [13] the low-density spinodal line diverges at a finite volume fraction φ_m > 0, in the presence of friction it diverges at φ_m → 0. In this respect, friction makes the motility-induced phase diagram of ABPs closer to that observed in most active particle systems, including dumbbells [14,15] and schematic models such as run-and-tumble particles [13], active Ornstein-Uhlenbeck particles [16], and Monte Carlo models [17], and also closer to the gas-liquid transition phase diagram in passive systems. Since the frictional interaction between colloidal-scale particles can be experimentally tuned [4], our result indicates that it is possible to experimentally modulate the motility-induced phase diagram by optimising the particle roughness.
The paper is organized as follows. After describing our model in Sec. II, we compare in Sec. III the frictionless and the frictional dynamics, in the homogeneous phase, showing that friction suppresses the translational diffusivity while it enhances the rotational one. The friction dependence of the motility-induced phase diagram is discussed in Sec. IV. Section V discusses the dynamics in the phase-separated region and highlights how friction promotes the stability of active clusters and hence promotes separation.
II. NUMERICAL MODEL
We consider two- and three-dimensional suspensions of active spherical Brownian particles (ABPs) with average diameter σ (polydispersity: 2.89%) and mass m, in the overdamped limit. The equations of motion for the translational and the rotational velocities are

γ dr_i/dt = F_a n̂_i + F_i + γ √(2D_t^0) η_i,
γ_r ω_i = T_i + γ_r √(2D_r^0) ξ_i,

with the self-propelling direction evolving as dn̂_i/dt = ω_i × n̂_i. Here D_r^0 and D_t^0 = D_r^0 σ²/3 are the rotational and the translational diffusion coefficients, γ is the viscosity, γ_r = γσ²/3, η and ξ are Gaussian white noise variables with zero mean and ⟨η(t)η(t')⟩ = δ(t − t'), F_a is the magnitude of the active force acting on the particle and n̂_i its direction, and F_i = Σ_j F_ij and T_i = Σ_j (σ_i/2)(r̂_ij × F_ij) are the forces and the torques arising from the interparticle interactions. In the absence of interactions and noise, particles move with velocity v_a = F_a/γ and do not rotate.
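A minimal sketch of how these overdamped equations of motion can be integrated (Euler-Maruyama, two dimensions, harmonic repulsion only, no tangential friction) is given below; parameter values are illustrative and this is not the production simulation code.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, sigma = 64, 12.0, 1.0           # particles, box size, diameter
kn, gamma_t = 100.0, 1.0              # normal stiffness, translational drag
Dr, Dt = 1.0, sigma**2 / 3.0          # rotational and translational diffusivities
va, dt, steps = 10.0, 1e-4, 2000      # active speed, time step, number of steps

pos = rng.uniform(0.0, L, size=(N, 2))
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)
start, unwrapped = pos.copy(), pos.copy()

for _ in range(steps):
    F = np.zeros_like(pos)
    for i in range(N):                # O(N^2) pair loop: fine for a small sketch
        for j in range(i + 1, N):
            rij = pos[i] - pos[j]
            rij -= L * np.round(rij / L)          # minimum-image convention
            d = np.linalg.norm(rij)
            if d < sigma:                         # harmonic repulsion on overlap
                f = kn * (sigma - d) * rij / d
                F[i] += f
                F[j] -= f
    n_hat = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    step_vec = dt * (va * n_hat + F / gamma_t) \
             + np.sqrt(2.0 * Dt * dt) * rng.normal(size=pos.shape)
    pos = (pos + step_vec) % L
    unwrapped += step_vec
    theta += np.sqrt(2.0 * Dr * dt) * rng.normal(size=N)  # rotational diffusion

msd = np.mean(np.sum((unwrapped - start)**2, axis=1))
print(f"mean-square displacement after {steps * dt:.2f} time units: {msd:.3f}")
```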
We use an interparticle interaction model borrowed from the granular community to model frictional particles. The interaction force has a normal and a tangential component, F_ij = f^n_ij + f^t_ij. The normal interaction is a purely repulsive harmonic interaction, f^n_ij = k_n (σ_ij − r_ij) Θ(σ_ij − r_ij) r̂_ij, where Θ(x) is the Heaviside function, σ_ij = (1/2)(σ_i + σ_j), r_ij = r_i − r_j, and r_i is the position of particle i. The tangential force is f^t_ij = k_t ξ_ij, where ξ_ij is the shear displacement, defined as the integral of the relative velocity of the interacting particles at the contact point throughout the contact, and k_t = (2/7) k_n. In addition, the magnitude of the tangential force is bounded according to Coulomb's condition: |f^t_ij| ≤ μ|f^n_ij|. Working in the overdamped limit, we neglect any viscous dissipation in the interparticle interaction. In the granular model, we also neglect rolling friction [18], which we expect not to qualitatively affect our results, in analogy with recent findings [19] on the role of rolling friction in discontinuous shear thickening. The value of k_n is chosen to work in the hard-sphere limit, the maximum deformation of a particle being of order δ/σ ≈ 5 × 10^−4. We simulate systems with N = 10^4, unless otherwise stated, in the overdamped limit, with integration time step 2 × 10^−8/D_r^0, using periodic boundary conditions. We have checked that for the considered value of N finite-size effects are negligible away from the critical point, in the range of parameters we consider. Data are collected after allowing the system to reach a steady state via simulations lasting at least 2τ, where τ is the time at which the diffusive regime is attained, which we estimate from the study of the mean-square displacement. In the time interval between consecutive images a free particle moves one diameter. In these simulations, we neglect both translational and rotational noise.
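The sketch below implements the contact law just described for a single two-dimensional contact: a harmonic normal force, a tangential spring acting on the accumulated shear displacement ξ_ij, and the Coulomb cap |f^t| ≤ μ|f^n|. The contribution of particle rotation to the relative velocity at the contact is omitted here for brevity, and the bookkeeping of ξ_ij across time steps is left to the caller.

```python
import numpy as np

def contact_force(ri, rj, vi, vj, xi, sigma_ij, kn=1e4, kt=2 * 1e4 / 7,
                  mu=0.9, dt=1e-5):
    """Return (total contact force on particle i, updated shear displacement xi)."""
    rij = ri - rj
    d = np.linalg.norm(rij)
    if d >= sigma_ij:                       # no overlap: contact broken, reset spring
        return np.zeros(2), np.zeros(2)
    n_hat = rij / d
    fn = kn * (sigma_ij - d) * n_hat        # harmonic normal repulsion
    # tangential part of the relative velocity (particle rotation omitted here)
    v_rel = vi - vj
    v_t = v_rel - np.dot(v_rel, n_hat) * n_hat
    xi = xi + v_t * dt                      # integrate the shear displacement
    xi = xi - np.dot(xi, n_hat) * n_hat     # keep the spring in the tangent plane
    ft = -kt * xi
    f_coulomb = mu * np.linalg.norm(fn)
    if np.linalg.norm(ft) > f_coulomb:      # Coulomb condition: sliding contact
        ft = ft / np.linalg.norm(ft) * f_coulomb
        xi = -ft / kt                       # truncate the spring accordingly
    return fn + ft, xi

# single illustrative call with two overlapping particles
f, xi_new = contact_force(np.array([0.0, 0.0]), np.array([0.9, 0.0]),
                          np.array([0.0, 1.0]), np.array([0.0, -1.0]),
                          np.zeros(2), sigma_ij=1.0)
print(f)
```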
A. Frictional effect on the interparticle collision
To appreciate the role of friction on the properties of ABPs, we start considering how friction affects the collision between two particles. The interparticle force acting between two colliding particles generally has a component parallel to the line joining the centers of the two particles and a tangential component. In the absence of friction, this tangential component allows the particles to slide one past the other to resolve their collision, as illustrated in the upper row of Fig. 1.
In the presence of friction, particles are not free to tangentially slide one past the other. Specifically, the tangential shearing induces an opposing frictional force that slows down the motion of the particles and induces their rotation. In the absence of thermal forces, or equivalently in the Pe → ∞ limit, the frictional forces cause the particles to rigidly rotate around their contact point, so that they never resolve their collision, as is illustrated in the bottom rows of Fig. 1. At any finite Pe, the stochastic forces acting on the particles will be able to break their contact and hence the frictional forces, allowing the particles to resolve their collision. We do expect, therefore, that friction may affect the physics of ABPs at high Pe, inducing a non-negligible rotation of the self-propelling directions of colliding particles.
B. Dilute phase
We start by describing the effect of friction on the MIPS, comparing the dynamics of frictionless and frictional systems in the homogeneous phase. Figure 2(a) illustrates the frictionless and the frictional mean-square displacement, at different Péclet numbers, for volume fractions in the gas phase. In the absence of friction (full lines) the mean-square displacement exhibits a crossover from a diffusive to a superdiffusive regime at t ≈ 6D_t^0/v_a², and from the superdiffusive to the asymptotic diffusive regime at t = 1/D_r^0 [6]. In the presence of friction (symbols), similar behavior is observed, but the system enters the diffusive regime on a smaller timescale. Consequently, the translational diffusivity is also reduced, as illustrated in Fig. 2(b). This finding is rationalized by investigating the mean-square angular displacement [Fig. 2(c)] and the dependence of the rotational diffusivity D_r on Pe [Fig. 2(d)]. Indeed, these quantities clarify that friction enhances the rotational diffusion of the particles, hence reducing the timescale at which the system enters the asymptotic translational diffusive regime.
We rationalize how friction leads to an increase of the rotational diffusivity by considering that in a collision a frictional particle experiences a torque, which induces the rotation of its self-propelling direction. More quantitatively, in the overdamped limit, the rotation θ_i induced by a collision is proportional to the induced torque and to the duration of the contact. If the contacts are at their critical Coulomb value, the typical torque magnitude is σ f^t ∝ μ f^n ∝ μPe, and the mean-squared angular displacement induced by a collision of duration t_coll is ⟨θ_i²⟩ ∝ μ² Pe² t_coll². At low density consecutive torques experienced by a particle are uncorrelated, and the number of collisions per unit time is proportional to Pe. Hence, assuming t_coll ∝ Pe^q, we predict for the rotational diffusivity

D_r(Pe, μ) ≃ D_r^0 + a μ² Pe^x,   (3)

with x = 3 + 2q and a a constant. Our numerical results of Fig. 2(d) indicate x ≈ 3.5. These results indicate that the average duration of an interparticle collision slightly grows with the Péclet number, t_coll ∝ Pe^{1/4}. We qualitatively rationalize the dependence of the collision duration on the Péclet number by considering that Pe controls the ratio between the frictional forces, which protract the duration of contacts, and the thermal ones, which eventually allow particles to resolve their collision. We have indeed observed in Fig. 1 that in the Pe → ∞ limit collisions are not resolved, so that t_coll = ∞.
The dependence of the rotational diffusion coefficient on Pe and on μ also allows us to rationalize the nonmonotonic behavior of the diffusivity observed in Fig. 2(b). Indeed, in the φ → 0 limit the long-time mean-square displacement of an active particle is ⟨r²(t)⟩ = 6D_t^0 t + (v_a²/D_r) t. At a small but finite density φ, we therefore expect

⟨r²(t)⟩ ≃ 6D_t^0 t + c(φ) (v_a²/D_r(Pe, μ)) t,   (4)

with c(φ) a constant of order one and D_r(Pe, μ) given by Eq. (3). Equation (4) describes the data of Fig. 2(b) well, with c(φ = 0.1) ≈ 0.8. Hence, the diffusivity grows as Pe² at small Pe and decreases as Pe^{2−x} at large Pe.
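A small numeric illustration of this argument, based on Eqs. (3) and (4) as reconstructed above with illustrative coefficients, is given below; it shows the diffusivity first growing roughly as Pe² and then decaying as Pe^{2−x}.

```python
import numpy as np

D_r0, D_t0, sigma = 1.0, 1.0 / 3.0, 1.0        # D_t0 = D_r0 * sigma^2 / 3
a, c, mu, x = 1e-4, 0.8, 0.9, 3.5              # illustrative coefficients, x ~ 3.5

for Pe in (1, 10, 100, 1000):
    v_a = Pe * D_r0 * sigma                    # from Pe = v_a / (D_r0 sigma)
    D_r = D_r0 + a * mu**2 * Pe**x             # Eq. (3): friction-enhanced D_r
    D = D_t0 + c * v_a**2 / (6.0 * D_r)        # Eq. (4): long-time diffusivity
    print(f"Pe = {Pe:5d}: D_r = {D_r:10.3g}, D = {D:10.3g}")
```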
IV. FRICTIONAL MIPS
We have investigated the motility phase diagram as a function of the friction coefficient μ, of the volume fraction φ, and of the Péclet number Pe ≡ v_a τ_B/σ, where v_a is the particle velocity in the φ → 0 limit, σ is the average particle diameter, and τ_B = 1/D_r^0 is the Brownian time, D_r^0 being the rotational diffusion coefficient of the self-propelling directions in the absence of friction. We have determined the phase diagram by considering the system to be phase separated when the distribution of the local density exhibits two peaks at the end of relatively short simulations. Indeed, this ensures that phase separation has occurred via spinodal decomposition, rather than via nucleation, as we previously verified [20]. We discuss here results obtained in three spatial dimensions.
In the absence of friction, our results are in qualitative agreement with previous investigations. The increase of the Péclet number drives the phase separation of the system, but only for volume fractions above a critical value [7,21-23], as illustrated in Fig. 3(a) (circles).
We highlight how friction influences this scenario by also illustrating in Fig. 3(a) the spinodal line for μ = 0.9 (triangles). The figure reveals that friction does not appreciably influence the high-density spinodal line, while it strongly affects the low-density one. In particular, while φ_s(Pe, μ = 0) reaches a plateau as Pe increases, φ_s(Pe, μ > 0) monotonically decreases with Pe. A similar effect of friction has been reported on the volume fraction of static granular packings [24,25]. Hence, the effect of friction becomes more relevant on increasing Pe, as we anticipated in Sec. III A. Figure 3(b) further investigates this dependence, illustrating the low-density spinodal line for different values of the friction coefficient. For all values of μ considered, the spinodal line decreases on increasing Pe or μ. Figure 3(c) illustrates the value of the lower spinodal line, at Pe = 10^3, as a function of μ. The figure reveals that this value exponentially decreases with μ, approaching a limiting value. This exponential dependence is rationalized by considering that the frictional forces, whose magnitude scales as μv_a ∝ μPe, can be disrupted through an activated process by thermal forces of constant magnitude. Since the associated Boltzmann factor exp(−μPe) vanishes in the Pe → ∞ limit, so does the spinodal line, for μ > 0. Hence, the frictionless case appears as a singular one. To quantitatively rationalize the interplay between friction and Péclet number, we consider that according to Eq. (3) frictional forces play a role for Pe > Pe* ∝ μ^{−4/7}, as found by imposing μ^2(Pe*)^x ∝ D_r^0. Indeed, we show in Fig. 4(a) that, when plotted versus Pe/Pe*, rotational diffusivity data corresponding to different values of the friction coefficient nicely collapse. Accordingly, the frictional critical line φ_c(Pe, μ) coincides with the frictionless one for Pe < Pe*(μ), while it deviates from it for larger Pe. We confirm this expectation in Fig. 4(b), which illustrates that the distance between the frictionless and the frictional critical lines, φ_c(Pe, 0) − φ_c(Pe, μ), scales with Pe/Pe*(μ).
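The sketch below illustrates the collapse implied by this argument, again with an assumed prefactor in D_r(Pe, μ): rescaling Pe by Pe*(μ) ∝ μ^(−2/x) makes curves for different μ coincide.

```python
import numpy as np

# Sketch of the scaling collapse: if collision-induced rotational noise
# matters only for Pe > Pe*(mu) with mu^2 * (Pe*)**x ~ D_r0, then
# Pe*(mu) ~ mu**(-2/x) (~ mu**(-4/7) for x = 3.5), and D_r(Pe, mu)/D_r0
# should be a function of Pe/Pe*(mu) alone.
x, D_r0, k = 3.5, 1.0, 1.0          # k is an assumed prefactor
def d_r(pe, mu):
    return D_r0 + k * mu**2 * pe**x

def pe_star(mu):
    return (D_r0 / (k * mu**2)) ** (1.0 / x)

for mu in (0.1, 0.5, 0.9):
    pe_scaled = 10.0                 # same Pe/Pe* for every mu
    print(mu, d_r(pe_scaled * pe_star(mu), mu) / D_r0)   # identical values
```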
V. DYNAMICS IN THE PHASE SEPARATED PHASE
Friction significantly stabilizes phase-separated configurations, thus expanding the coexistence region in the φ-Pe plane. To understand how friction stabilizes an active cluster, we start by considering a frictional simulation for parameter values at which phase separation occurs, Pe = 500, φ = 0.2, and μ = 0.9. Also, for ease of visualization, we consider a small two-dimensional system, with N = 500 particles, so that in the steady state we readily observe the formation of a single cluster. This is illustrated in Fig. 5(a). In this and the other panels of Fig. 5, we also illustrate the active velocity field evaluated on a square grid with lattice spacing ∼1.5σ. We associate to each grid point the average active velocity of the particles in a circle of radius 2.2σ. In the figure, we show the values of the field on the grid points that average over at least five particles. We use the configuration of Fig. 5(a) as the initial configuration of two different simulations, a frictionless one (μ = 0) and a frictional one (μ = 0.9). In all plots, the central black circle identifies the position of the particle closest to the center of mass of the cluster in the initial configuration. We emphasize the rotational motion of the cluster by drawing a line connecting the central particle and another particle of the cluster. Both with and without friction the cluster rigidly rotates around its center of mass. In the absence of friction, the rotation of the cluster makes the active velocities parallel to the cluster surface [Fig. 5(b)], inducing a cluster instability [Fig. 5(c)]. We see in panel (f) that, in the absence of friction, as the cluster rotates, the average interaction force that opposes the motion along the direction of the self-propelling forces decreases, fluctuating around a value characteristic of the homogeneous phase once the cluster breaks. In the presence of friction, the cluster rotation induces that of the self-propelling directions, and the cluster remains stable [Figs. 5(d) and 5(e)]. For these illustrative two-dimensional simulations, N = 500, Pe = 500, φ = 0.2, and μ = 0.0 or 0.9.
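A sketch of this coarse-graining procedure is given below; it averages the self-propulsion directors (proportional to the active velocities) around each grid point and discards grid points with fewer than five neighbors, as described above. The random input configuration is made up for illustration.

```python
import numpy as np

# Sketch of the coarse-grained active velocity field described in the
# text: average the self-propulsion directions of particles within a
# radius 2.2*sigma of each grid point (spacing ~1.5*sigma), keeping only
# grid points that average over at least five particles.
def active_velocity_field(pos, n_hat, box, spacing=1.5, radius=2.2, min_count=5):
    grid_1d = np.arange(0.0, box, spacing)
    gx, gy = np.meshgrid(grid_1d, grid_1d)
    field = np.full(gx.shape + (2,), np.nan)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d = pos - np.array([gx[i, j], gy[i, j]])
            d -= box * np.round(d / box)            # periodic distances
            mask = np.einsum('ij,ij->i', d, d) < radius**2
            if mask.sum() >= min_count:
                field[i, j] = n_hat[mask].mean(axis=0)
    return gx, gy, field

rng = np.random.default_rng(1)
pos = rng.uniform(0, 20.0, (500, 2))
ang = rng.uniform(0, 2 * np.pi, 500)
n_hat = np.column_stack([np.cos(ang), np.sin(ang)])
print(active_velocity_field(pos, n_hat, box=20.0)[2].shape)
```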
Figures 5(a)-5(c) show that in the absence of friction the cluster becomes unstable and disintegrates upon rotation. The instability occurs as the rotation makes the self-propelling velocities tangential to the cluster surface. We have verified that the system becomes macroscopically unstable by investigating the magnitude of the force that opposes the motion of the particles along their self-propelling directions, F(t) = −(1/(N F_a)) d/dα U(r(t) + α n̂)|_{α=0}, where U is the elastic energy of the system and n̂ is the director of the active velocity field, normalized by the magnitude F_a of the active force acting on each particle and by the number of particles N. Figure 5(f) shows that F(t) quickly decreases as the cluster rotates, reflecting the development of the instability.
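The stability measure F(t) can be estimated with a finite difference of the potential energy along the self-propulsion directions, as sketched below. The pair potential used here (WCA) and all parameter values are assumptions for illustration; the paper's actual interaction model may differ.

```python
import numpy as np

# Sketch of F(t) = -(1/(N F_a)) d/dalpha U(r + alpha n_hat)|_0, estimated
# with a central finite difference. The WCA pair potential and the
# parameter values are illustrative assumptions.
def wca_energy(pos, box, sigma=1.0, eps=1.0):
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                    # periodic images
    r2 = np.einsum('ijk,ijk->ij', d, d)
    iu = np.triu_indices(len(pos), k=1)
    r2 = r2[iu]
    rc2 = (2.0 ** (1.0 / 6.0) * sigma) ** 2
    r2 = r2[(r2 < rc2) & (r2 > 0)]
    sr6 = (sigma**2 / r2) ** 3
    return np.sum(4.0 * eps * (sr6**2 - sr6) + eps)  # shifted to zero at cutoff

def stability_measure(pos, n_hat, box, f_a=1.0, d_alpha=1e-4):
    u_plus = wca_energy(pos + d_alpha * n_hat, box)
    u_minus = wca_energy(pos - d_alpha * n_hat, box)
    return -(u_plus - u_minus) / (2.0 * d_alpha * len(pos) * f_a)

rng = np.random.default_rng(2)
pos = rng.uniform(0, 15.0, (200, 2))
ang = rng.uniform(0, 2 * np.pi, 200)
n_hat = np.column_stack([np.cos(ang), np.sin(ang)])
print(stability_measure(pos, n_hat, box=15.0))
```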
Friction promotes phase separation by suppressing this rotation-induced instability of active clusters. This is illustrated in Figs. 5(a), 5(d), and 5(e). The frictional cluster is stable because its rotation induces that of the self-propelling directions of its particles. Indeed, we observe in Figs. 5(a), 5(d), and 5(e) that the active velocities always point towards the center of the cluster. This tendency is more pronounced the higher the frictional forces, and hence at large μ and large Pe. As a consequence of this process, while in the absence of friction active clusters quickly disintegrate as they start rotating, in frictional ABPs one observes long-lasting active clusters that perform many revolutions before eventually breaking apart. In this respect, frictional ABPs behave as active dumbbells [14,15].
From the decay of F(t) it is possible to extract the lifetime of the considered cluster. For the cluster illustrated in Fig. 5, this lifetime is ≃ 0.12/D_r^0 in the absence of friction, as apparent from Fig. 5(f). For this cluster, we have observed the lifetime to grow quickly as the friction coefficient increases, reaching values beyond our simulation capabilities for μ ≳ 0.2. This result strongly suggests that frictional clusters break when thermal fluctuations succeed in inducing the relative rotation of the particles in contact.
VI. DISCUSSION
We have investigated the stability phase diagram of frictionless active Brownian particles and rationalized its spinodal line within a kinetic model. In this model, the finite value of the lower spinodal line in the Pe → ∞ limit, in the absence of friction, results from the competition of two processes. Phase separation is promoted by the collisions of the particles in the homogeneous phase, which may trigger the agglomeration. Being related to the particle velocity, this process leads to a flux of particles j_g ∝ Pe from the dilute to the dense phase. The dilute phase is promoted by the process that allows particles to resolve their collisions. At low Pe, particles mainly resolve their collisions by rotating their self-propelling directions. In the high-Pe limit we are interested in, conversely, particles resolve their collisions by sliding past one another, as illustrated in Fig. 1, before their self-propelling direction changes. This sliding-detaching mechanism, also driven by the motility, leads to a flux of particles from the dense to the less-dense phase, j_sd ∝ Pe. Since both j_g and j_sd are proportional to the Péclet number, the resulting spinodal line tends not to depend on it.
Within this context, the role of friction is rationalized by considering its influence on these contrasting fluxes. In Sec. III B we demonstrated that friction influences the homogeneous phase by increasing the rotational diffusivity. This increase does not affect the typical velocity of the particles and hence the flux j_g. On the other hand, we have found in Sec. III A that friction suppresses the ability of two particles to slide past each other to resolve their collision. Similarly, in the phase-separated phase, friction stabilizes an active cluster, which would conversely disintegrate upon rotation, as discussed in Sec. V. The balance between j_g and j_sd therefore leads to a friction-dependent spinodal line, with the coexistence region widening on increasing the friction coefficient.
That the suppression of the sliding-detaching mechanism leads to a widening of the coexistence region is consistent with previous findings. In the context of frictionless spherical particles, the sliding-detaching mechanism is suppressed when the dynamics is investigated via Monte Carlo simulations, which are unable to account for the cooperative displacement of colliding particles. Consistently, in these simulations the spinodal line is found to vanish in the Pe → ∞ limit [16,17]. The sliding-detaching mechanism is also suppressed in systems of active anisotropic particles. Indeed, these particles cannot rotate independently when in a dense cluster, which implies that an active cluster of anisotropic particles does not destabilize when rotating, as for frictional spherical particles (see Fig. 5). Consistently, the spinodal line of frictionless dumbbells also vanishes [14,15] in the Pe → ∞ limit.
Interestingly, we notice that long-lived rotating clusters have also been observed in experiments on active thermophoretic particles [9-11]. These clusters may perform several revolutions before restructuring. While it is understood that these clusters might be stabilized by the phoretic attraction between the particles [10,11,26], it has been suggested that this attraction is not always present [9]. In these circumstances, friction might be a concurrent stabilizing factor, as one could experimentally ascertain by investigating whether the self-propelling directions of the particles rotate with the cluster itself.
Regardless, frictional forces could be enhanced by acting on the roughness of the particles [4], suggesting that friction could be used as a control parameter to experimentally tune the motility-induced phase diagram.
|
2020-10-19T18:10:15.178Z
|
2020-09-01T00:00:00.000
|
{
"year": 2020,
"sha1": "7777f6996c11181a1a34a806a35a29c502c98be5",
"oa_license": "CCBYNC",
"oa_url": "https://dspace.mit.edu/bitstream/1721.1/135531/2/PhysRevE.102.032612.pdf",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ae6388484b0de374ec234601e0e278cad54ee61d",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
}
|
78874007
|
pes2o/s2orc
|
v3-fos-license
|
Family-witnessed resuscitation: focus group inquiry into UK student nurse experiences of simulated resuscitation scenarios
Aims To describe the impact of family members’ presence on student nurse performance in a witnessed resuscitation scenario. To explore student nurses’ attitudes to simulated family-witnessed resuscitation and their views about its place in clinical practice. Background Family-witnessed resuscitation remains controversial worldwide. Hospital implementation remains inconsistent despite professional organisation support. Systematic reviews of international literature indicate family members wish to be involved and consulted; healthcare professionals express concerns about being observed while resuscitating. Student nurse perspectives have not been addressed. Design Qualitative, focus groups. Methods Participants: UK university second-year student nurses (n=48) who participated in simulated resuscitation scenarios (family member absent, family member present but quiet or family member present but distressed). Data generation 2014: focus group interview schedule—five open-ended questions and probing techniques. Audio recordings transcribed, analysed thematically. Research ethics approval via University Research Ethics committee. Findings Overarching theme=students’ sense making—making sense of situation (practically/professionally), of themselves (their skills/values) and of others (patients/family members). Students identify as important team leader allocating tasks, continuity of carer and number of nurses needed. Three orientations to practice are identified and explored—includes rule following, guidance from personal/proto-professional values and paternalistic protectionism. Discussion We explore issues of students’ fluency of response and skills repertoire to support family-witnessed resuscitation; explanatory potential to account for the inconsistent uptake of family-witnessed resuscitation. Possible future lines of inquiry include family members’ gaze as a motivational trigger, and management of guilt.
BACKGROUND
There is over 30 years of evidence supporting family-witnessed resuscitation (FWR), yet it continues to be controversial around the world. 1-4 Family members' (FMs') presence during resuscitation is supported by professional organisations such as the US Emergency Nurses Association 5 and joint European nursing organisations, 6 yet FWR is not a global normative practice. 7 Evidence challenges speculations about effects on families. A recent multicentre randomised controlled study examined whether FWR reduces the likelihood of post-traumatic distress symptoms 8 and considered implications for medical efforts during resuscitation, effects on teams and any legal claims. In total, 8 of 15 French prehospital emergency medical units (EMUs) were randomly assigned to an intervention group; the remainder were controls. FMs were asked if they wished to be present during resuscitation (n=266); families in control EMUs were not offered this option (n=304). Intervention group FMs observed resuscitation in their home; control group families did not observe resuscitation. Telephone interviews took place 90 days post event using an Impact of Event Scale and Hospital Anxiety Scale, emergency medical team stress measures, observed FM response and behaviour during resuscitation, and complaints/medicolegal claims. Post-traumatic distress symptom frequency was significantly higher in the control group (adjusted OR 1.7; 95% CI 1.2 to 2.5; p=0.004) and for FMs absent during resuscitation (adjusted OR 1.6; 95% CI 1.1 to 2.5; p=0.02). Families did not interfere with medical efforts during FWR, raise resuscitation team emotional stress or make more legal claims.
Other work indicates that patients and FMs want FWR available. [9][10][11][12][13] Parents of children being resuscitated indicate they want to choose whether or not to be present. They do not want healthcare staff making the decision alone. 14 Where FMs attend FWR, 94% indicate they want to be present again. 12 14 In contrast, messages about FWR from healthcare providers are inconsistent. Between 7% and 96% of healthcare staff favour FWR, 12 13 and attitudinal surveys indicate it is perceived to be a good thing. 4 There is geographic variation; studies from Belgium, Germany, Singapore and Turkey indicate greater concerns about FWR compared with UK, Irish, Australian and US studies. [15][16][17][18][19][20][21][22] The reason is unclear and may be contextual-for example, individual predisposition to FWR, cultural differences, educational preparation, rural versus urban location and healthcare delivery structure. 23 Healthcare practitioners with FWR experience are more positive than those without it, 4 14 24 but regardless of FWR exposure, practitioners want to retain overall final control. 12 13 Salmond et al's 25 systematic review identified perceived advantages/disadvantages of FWR for patients, families and providers. FWR is perceived to help families understand the situation's seriousness, maintain their patient connection and demonstrate that staff have done everything possible. 11 Witnessing resuscitation is distressing but considered to be a good thing because it may help FMs come to terms with death and reduce pathological grief. 8 13 26 However, concerns remain about FM presence adding to practitioner performance anxiety, limiting coping strategies and interfering with care delivery. 12 27-29 These continue despite evidence that families do not usually interfere with resuscitation, and experienced practitioners' performance is usually unaffected. 12 14 This last issue is of relevance to nurse educators. First, student nurses respond appropriately when resuscitation is indicated; second, students deliver appropriate care to the level of their ability; and finally, they are prepared for situations they will meet once they are registered nurses (RNs).
Student nurses are partially socialised into the practice world and are not expected to fully conform to norm values. They have the potential to produce distinctive insights into the impact of FM presence during FWR. Student nurses are often first responders at UK hospital cardiac arrests, and our interest in FWR stems from our desire to explore the ways students make sense of clinical situations and develop skills for dealing with real-world problems. In particular, we are interested in how educators may use high-fidelity simulated environments to access difficult clinical situations to explore/develop student competence (cognitive, functional, ethical and personal competence 30 ) in FWR and overcome real-world ethical constraints. Using simulated environments allows us to explore student nurses' views about FWR and identify ways to support their transition to RNs. This paper reports on the qualitative arm of a mixed-methods study which included a randomised controlled trial. 31 The overall design is reported elsewhere. 32 The trial took place in a high-fidelity cardiopulmonary resuscitation (CPR) scenario in a UK university nursing department skills lab. Seventy-nine second-year adult nursing students were recruited via email, and randomly allocated to one of the three scenarios-FM absent, FM present but quiet and FM present but distressed. Students worked in teams of 3-4 and responded to a standardised preprogrammed manikin, simulating events requiring CPR. Actors portraying FMs of both genders were provided with a script and each manikin had an actor voice-over.
METHODS
Audio-recorded qualitative data were captured through four postscenario focus groups facilitated by GK and JA, experienced researchers trained in focus group techniques. A five open-ended questions interview schedule elicited experiences about the simulated cardiac arrest scenarios, focusing on how they felt they managed/responded. Probing techniques confirmed understanding. Contemporaneous notes were taken around specific points. 33 Of the 79 students who took part in the CPR scenario, 48 students elected to take part in the focus groups. These were classroom based and lasted ∼60 min each. GK, JA and DP transcribed and analysed audio recordings. Transcript samples were assessed for veracity.
Data analysis
Thematic analysis of focus group transcripts was carried out independently by GK, JA and DP using qualitative data analysis software (QDA Miner Lite). The final version of findings was developed from postanalysis reviews using a constant comparative thematic technique once saturation was achieved. 31 Final findings were agreed by group consensus to ensure rigour. Transcripts were not returned to participants.
Ethics
The Code of Ethics of the World Medical Association (Declaration of Helsinki) was followed. Research ethics opinion was secured from the University Ethics Committee; written and verbal consent was obtained from focus group participants beforehand. All students were made aware of their rights of anonymity and confidentiality, withdrawal at any time, and that anonymised data would be published.
FINDINGS
The overarching theme was sense making, with three subthemes: making sense of the situation ( practically and professionally), making sense of themselves (skills and values) and making sense of others ( patients and FMs).
Sense making: situation-practically
Students compared their FWR scenario experience with their skills laboratory clinical simulation experience and previous clinical experience. Their simulated FWR scenario experience was real and powerful. They related it to their clinical practice CPR experience, and their knowledge/understanding of how hospital clinical environments operate. Participants perceived clinical simulation to be useful for their learning. Activities carried out in simulated learning environments gave them confidence to act. They synthesised simulated clinical experience with real clinical experience, emphasising the importance of team leaders allocating roles/tasks necessary for successful CPR.
M: No, I think because you took the handover and then they said, 'Right, let's split this up. Right! Airway, breathing'. So, somebody took control. K: Yeah, I thought it was very controlled. Interviewer: …and was that your experience that it was controlled? K: A lot of what we did was controlled. (Focus Group 2) Where FMs were present, students spoke of the need for continuity of care to build trusting relationships at difficult times. Reflecting on their CPR experience (simulated/real), they identified three nurses as the minimum necessary to care for FMs without compromising patient safety (four nurses reduce resuscitation team strain) and prioritised associated actions/tasks. S: We were quite lucky because with ours, we had four people in our group. So if we had less, it would have affected CPR. B: We could spare somebody to go out. If you have got two of you, one doing chest and one doing the air bagging, where is the spare person to go out and inform the relative? V: Yeah, because at one stage we had two; we had Rachel outside the room and we were still able to do it. (Focus Group 3)
Sense making: situation-professionally
We identified three main currents in students' drive to make sense of the situation from a professional perspective. These currents do not necessarily match the specific scenario students encountered and seem to reflect an emerging professional nursing orientation. The first current is characterised by adopting a rule-following orientation: doing whatever guidelines advise regardless of their relevance, disengaging from personal and professional autonomy and subsuming oneself to the will of an omniscient other.
A:This is what I mean. I wouldn't want to make the decision unless there was like a national guideline, or nurses have the right, or nurses do not have the right, or the decision is given to the patient or the relative. I would follow whatever that guideline was obviously… B: …But who would make the guideline? A: Well exactly, who makes the rest of them? (Focus Group 2) The second current is characterised by using personal and proto-professional values for guidance. These include people's rights to choose and express choice; people's autonomy over their bodies; health professionals seeking consent from people when giving care, and acknowledging possible tensions between relatives' rights and individual patient rights.
T: We offered him [relative] a chance to come in. I think at first, when we were doing observations and all that, we kind of went there and checked. When the situation changed I went out and informed, give him a chance to see if he wanted to come into the room and see the whole thing but he was all right. He just said, 'I don't want to get in your way', and I just went back and said, 'You are not getting in my way or anybody's way if you really want to you can just come in'. So I think the opportunity was there. He was offered the opportunity if he wanted to come into the room, but it was his choice again, yeah… (Focus Group 1) The third driver was a desire to assert paternalistic protectionist rights as a professional in order to command and control events, processes and care environments. That's what they are trained to do and it's at that point they say, 'I don't think it's right', or 'It's not, you know, it's not right for whatever reason', then I would respect their… You know, it's like in the courts, they make good decisions and bad decisions but at the end of the day you just have to accept that they are the professionals and they make the decision if someone is guilty or not guilty and you just have to respect that. I mean it's the same in the healthcare profession, where we are trained to do what we do and if we don't think something is right, then we should say that it's not right. (Focus Group 2)
Sense making: self-skills
Working under FMs' gaze was unsettling for some students. This uneasy feeling appeared to be linked to two related aspects -first, they anticipated FMs' criticism of their work and caring style during CPR; second, they feared being found out as fake unskilled professionals. They were anxious that FMs would blame them for resuscitation failure, for patient death, of the realities of accountability and being called to account in a law court. This anxiety was linked to feeling self-conscious. They made assumptions about FMs' feelings, assumed these assumptions were real and used them to inform their actions/plans. Other students experienced events differently and found working under FMs' gaze challenging but stimulating. They viewed it positively, felt more aware of the situation wanting to raise their standards, and for FMs to see that everything was done. Simulation led some students to experience guilt when they realised their omissions. They gained insight into possible future actions and used the experience to anticipate different action strategies. E: I feel guilty now that I didn't actually talk to the relatives now, and knowing that, it shows how easy they can be forgotten when they are not in the room. (Focus Group 1) Many students spoke of the simulation scenario positively, but for some the simulation scenario structure hindered their performance, they were unsure what to do and felt powerless. They noted how scenarios were different from real life, and their actions/plans did not fit the scenario. V: Yeah, because we were working as a team-like you were doing the compressions, and you were doing the compressions, me and Liz were swopping over doing the um… T: …do you think that resus is already set up it stalled you because we were a bit like that weren't we? Because we were like, 'Blood pressure', 'No! His blood pressure is already on!' So it kind of like stopped us from going. Whereas maybe if it was from scratch, we might have all been on the ball. (Focus Group 1) Despite this, FM presence/absence in the scenario was noticeable when they discussed their experiences. Where an FM was present, students were concerned about being asked questions they could not answer and they anticipated unpredictable FM behaviour. Students feared FM behaviour that would be difficult for them to manage, that is, no eye contact/talking. Sh: We asked her if she wanted to leave, that lady; but I tried a bit, but she refused didn't she? She said she wanted to stay… E: …you took the role of looking after the relative but she kept speaking to me. Sh: Yeah… it was like she didn't really comply with the situation very well, which was true; which reflects probably what would happen in real life… (Focus Group 3) Where an FM was absent, students talked about the experience in a calm, controlled way. They described how leaders directed their actions, divided up tasks easily, focusing on technical/technological care components. Where there was a calm FM present during resuscitation, students noted the calmness of their CPR. B: I don't think so, no… T: …because he was quite calm and quiet, we stayed calm and quiet. So I don't know whether that would affect if we had the relative that was hysterical. Y: I think at one point I was quite aware that I was standing quite close to her. So I didn't actually, when he stopped breathing, I didn't realise I had my back to her because she was so quiet. And I turned round and said, 'Sorry, are you all right?' (Focus Group 3)
DISCUSSION
Using simulated healthcare environments for educating student nurses means life-like scenarios can be created in which students practise, learn and make mistakes safely without harming patients. 30 For many participants, simulated FWR scenarios are realistic and powerful, unlike other skills development sessions. Simulation echoed their real-world CPR experience and resonated with their knowledge/understanding of how hospital clinical environments operate. This helps us listen to them with some confidence that their actions mirror their behaviour in real-world settings. We can hear them emphasise the importance of team leaders allocating roles/tasks necessary for successful CPR. Where FMs were present, we can hear the need for continuity of carers for FMs and implications for the number of nurses needed for effective resuscitation, which has implications for clinical practice. However, not all students spoke positively about simulation because the scenarios were obviously different from real life, and their actions/intended actions did not fit.
We identified three emerging currents in professional orientation regarding students' willingness to engage in FWR-rule following, guidance from personal and proto-professional values and paternalistic protectionism. It can be argued that to care for patients in a safe, efficient, effective and equitable way, RNs must be able to exhibit all three currents of behaviour at different times depending on the situation faced. 34 Nurses should deploy different behaviours rather than apply the same behaviour regardless of the situation. [35][36][37] From our perspective as educators, there is a challenge to help students develop response fluency and build relevant skills repertoires 37 to care for patients safely.
Student behaviour may be linked to FM gaze and the anxiety and uncertainty evoked. Anticipating criticism, and fear of being found out as unskilled, connects with feelings of selfconsciousness, reinforces the assumption of their validity and leads to a focus on technical/physical patient care to the exclusion of FMs and their needs. This may have implications for family grieving and raise the incidence of pathological grief reactions. Increased simulation use may help future RNs cope with an increased public demand for transparent healthcare delivery played out as 'gaze'. This may be worth exploring with RNs to gauge its explanatory worth when examining the inconsistent uptake of FWR. 25 Other students experienced FM gaze as challenging but stimulating. The gaze was used as a motivational trigger to raise standards and transparency so FMs could see that everything was carried out to save the patient. This may support healthy grieving and protect families from pathological grief experiences. Simulation's potential to generate new learning can be seen in students who experienced guilt on realising the gaps in their previous real-world resuscitation events. This exercise helped them achieve insight into different future action strategies. While simulation is safer for patients, educators must be watchful for these responses so that insights may be channelled for positive outcomes.
Students' views about FWR vary despite their exposure to relevant theoretical knowledge and experiential learning in practice which reflects Paplanus et al's, 12 and Rittenmeyer and Huffman's 13 work. Some students perceive FWR to be a good thing echoing Chapman et al, 4 but this is countered by others who consider it a barrier to providing safe patient care. Few students had directly experienced FWR, and exposure does not seem to influence their wish to retain overall final control. 4 12-14 Further work is needed to examine how students synthesise theoretical knowledge and clinical experience when formulating attitudes to FWR. There is also scope to explore emotional resonance between students/RNs and patients/FMs in time-sensitive care situations-How is it experienced by nurses, patients and their families? What impact does emotional resonance have on care delivery? What are the implications for delivering safe care? Student concerns about FM presence refer to performance anxiety, effects on coping strategies and possible interference with care delivery, echoing Åsgård and Maindal 28 and Rittenmeyer and Huffman. 13 These fears (also identified in studies with professionals 29 ) appear to continue despite students' experience of simulated FWR regardless of FM presence/ absence. Further work is required to examine how students use lived experience to confirm/disconfirm their FWR views, and how students learn to reflect/deflect emotion in clinical encounters. Carefully designed educational encounters can help prepare nurses and healthcare professionals manage complicated situations. Postsimulation debriefings may also provide an opportunity to examine evidence and explore perspectives from various stakeholders.
CONCLUSION
Systematic reviews of international literature indicate that FMs wish to be involved and consulted in FWR. Healthcare professionals however express concerns about being observed while resuscitating. Until this study, student nurse perspectives have not been addressed, but they are often first responders in hospitals, and this has implications for the quality and safety of care delivered to patients and their families. This study suggests that students' views about FWR vary despite exposure to relevant theoretical knowledge and experiential learning in practice. Few of the students in this study had direct experience of FWR, and exposure to FWR does not seem to influence their wish to retain overall and final control over FWR. Using simulated FWR appears to help students develop cognitive and functional competency in a safe environment.
Findings from this small piece of exploratory work based in one university nursing department must be treated with caution. However, there is scope for a larger project to explore different educational strategies in addressing anxiety when working under the gaze, developing response fluency and harnessing the potential of motivational triggers.
|
2018-12-06T02:13:07.054Z
|
2016-06-24T00:00:00.000
|
{
"year": 2016,
"sha1": "98635428a124bf7b74c0ff52979634ba8ce6c55b",
"oa_license": "CCBY",
"oa_url": "https://uwe-repository.worktribe.com/preview/922491/Pontin%20et%20al%20FWR%20qualitative%20paper%20BMJ%20Sim%20%20Tech%20Enhanced%20Learning%20june%202016.pdf",
"oa_status": "GREEN",
"pdf_src": "BMJ",
"pdf_hash": "16ade38c86c39a198905aa7c28119d46d8022755",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
184486934
|
pes2o/s2orc
|
v3-fos-license
|
The Validity and Reliability of Live Football Match Statistics From Champdas Master Match Analysis System
The aim of the present study was to investigate the validity of match variables and the reliability of the Champdas Master System used by trained operators in live association football matches. Twenty professional football coaches voluntarily participated in the validation of the match variables used in the System. Four well-trained operators were divided into two groups that independently analyzed a match of the Spanish La Liga. Aiken's V averaged 0.84 ± 0.03 and 0.85 ± 0.03 for the validation of the indicators. The high Kappa values (Operator 1: 0.92, 0.90; Operator 2: 0.91, 0.88), high intra-class correlation coefficients (ranging from 0.93 to 1.00), and low typical errors (ranging from 0.01 to 0.34) between the first and second data collections represented a high level of intra-operator reliability. The Kappa values for inter-operator reliability were 0.97 and 0.89. The intra-class correlation coefficients and typical errors ranged from 0.90 to 1.00 and from 0.01 to 0.24 for the two independent operators within the two data collections. The results suggest that the Champdas Master System can be used validly and reliably by well-trained operators to gather live football match statistics. Therefore, the data provided by the company can be used by coaches, managers, researchers and performance analysts as valid match statistics on players and teams during their professional tasks and investigations.
INTRODUCTION
Sport performance analysis during actual competition is one of the main sources of information that are beneficial for the training and coaching process (Sampaio and Leite, 2013). In essence, coaches and athletes could be provided with information of interest that is difficult to be detected throughout their subjective perception, and then refine their training planning and improve athlete's performance purposefully (Hughes and Franks, 2015). Within its domain, the analysis of technical and tactical performance has captured substantial research interest in either individual sports (Reid et al., 2016;Cui et al., 2019), or team sports Zhou et al., 2018), where a set of performance indicators that contains relevant information about players and teams' match performance during competition have been established and collected via observational approaches (Hughes and Bartlett, 2002;Lames and McGarry, 2007;Hughes and Franks, 2015). Fundamentally, it is mainly based on systematic observation, understood as an organized recording and quantification of sport behaviors in their natural context (O'Donoghue and Mayes, 2013;Sampaio and Leite, 2013). To achieve this purpose, both a well-designed notational system and valid, precise and objective performance indicators are required so that technical-tactical aspects of match performance could be easily gathered and used for the subsequent analysis and practical applications (Bradley et al., 2007;Carling et al., 2007;O'Donoghue, 2007).
Therefore, as a prerequisite for any performance analysis research that uses a novel system or instrument, the repeatability and accuracy of the new tool, and the validity of the performance indicators used, should be verified before collecting and analyzing players' and teams' performances (O'Donoghue, 2014; Chacón-Moscoso et al., 2018). Although some automatic player tracking systems are currently available for performance analysis, most observational studies or practices in the sport field are still carried out with computerized notational systems in which performance analysts manually code sport performance indicators with predetermined short-cut keys (Bradley et al., 2007; Hizan et al., 2010; Liu et al., 2013; Beato et al., 2018). From a theoretical and applied perspective, a performance indicator should help to explain the match outcome and thus advance understanding, providing meaningful insights into game behavior (O'Donoghue, 2009). The use of precise operational definitions and the validity of performance indicators are related to the reliability of data collection in performance analysis and therefore have a strong impact on the correct interpretation of match performances (McGarry, 2009).
Validity is generally referred to as the ability of a measurement tool to reflect what it is designed to measure (Atkinson and Nevill, 1998), and usually for performance analysis instrument, it can be determined through expert coaches' opinions in each sports category (Hraste et al., 2008;Cupples and O'Connor, 2011;Larkin et al., 2016;Torres-Luque et al., 2018). For instance, Larkin et al. (2016) validated a coding instrument of assessing movement awareness and technical skills for soccer players with a panel of nine experts. Similarly, with the review and confirmation of eleven experts, Torres-Luque et al. (2018) validated an observational instrument for analyzing the technical-tactical match performance in tennis. Further, the reliability of a sport notational system is as important as its validity (Hayen et al., 2007). It refers to the reproducibility of values of a test, assay or other measurement in repeated trials on the same individuals (intra-observer reliability) (O'Donoghue, 2009), and repeatability over different observers (inter-observer reliability) (Hopkins, 2000). Sports notational system may be limited in reliability due to manual errors, observer's inexperience, number of observers (Beato et al., 2018) so that its results will mislead coaches or performance analysts to make poor decisions about training and match preparation.
In recent years, the development of semi-automatic match analysis systems in elite football has enhanced access to information related to match events and movements (Carling et al., 2007; Mackenzie and Cushion, 2013). As a result, performance directors, coaches and researchers frequently utilize these systems to gain insights into football match performance. Concurrently, the accuracy and reliability of some widely used systems from various commercial football match statistics providers have been validated and verified, such as the AMISCO® system (Carling et al., 2008; Zubillaga et al., 2009; Lago et al., 2010; Dellal et al., 2011; Castellano et al., 2014), PROZONE Sports Ltd.® (Bradley et al., 2007, 2011; Castellano et al., 2014), SportsCode (Hughes, 2004; Reed and Hughes, 2006; O'Donoghue and Holmes, 2014), OPTA Sportsdata (Oberstone, 2009, 2010, 2011; Liu et al., 2013), SICS (Rampinini et al., 2009; Osgnach et al., 2010; Beato and Jamil, 2017), Dartfish (Eltoukhy et al., 2012; Padulo et al., 2015; Larkin et al., 2016; Li et al., 2016), and Nacsports (Clear et al., 2017). Indeed, these systems offer both coder-friendly operating platforms and high reliability in measuring technical-tactical performance indicators. Yet, there remain some limitations concerning the above-mentioned studies and measures. Most of them only studied the test-retest reliability of these systems, without considering the content validity of the performance events or indicators included. Besides, some systems mainly focused on the successful or unsuccessful outcome of technical performance events, such as shots, dribbles and crosses (Larkin et al., 2016), so that tactical information related to pass directions and passing networks could not be gathered; such information helps to understand the complex match characteristics of this invasion sport and has become one of the recent research topics for football investigators (Gonçalves et al., 2017; Pollard, 2019; Praça et al., 2019).
In comparison with previously existing systems, the Champdas Master System, a semi-automatic match analysis system developed by Champion Technology Co., Ltd. (a leading Chinese sport data company founded in 2004), has been employed to provide match data services for the majority of professional teams from the Chinese Football Super League (first division), the Chinese Football Association China League (second division), the China National Youth Super League (U13-U19 divisions), the China Men's National Team, and clubs from the Korean K-League (first division). Meanwhile, the company has also been cooperating with major Chinese online sport video media (PPTV) by collecting, storing, analyzing and visualizing professional football match data of the first leagues in Korea, Spain and the United Kingdom during online match broadcasting. In short, most match reports and analyses provided by the system are widely used by Asian professional football clubs, coaches, media, and governing organizations. Furthermore, what makes the system stand out is that it not only covers common match performance indicators as its peers do, but also includes a more complex classification of players' passing directions, which is seldom found in other systems. As noted above, passing behaviors reveal a great deal of tactical information about football, because some teams try to create opportunities with long direct passes, whereas other teams are characterized by a possession-style type of play (Larkin et al., 2016; Goes et al., 2019). Players are more likely to be successful when passing the ball backward or sideways than when attempting forward passes, but the latter have been regarded as a key performance indicator when evaluating the penetration of offensive actions and assessing players' performance (Goes et al., 2019). However, given the wide range of professional leagues, clients and audiences it serves, little is known about whether the match performance indicators used in the Champdas Master System are valid and whether the live match data collection process is reliable among its trained operators.
Consequently, it is imperative to execute a thorough validity and reliability analysis of the system, so that its statistics would be trustworthy for research, coaching and broadcasting purposes. Therefore, the present study was aimed: (i) to identify the validity of match performance variables used by Champdas Master System; (ii) to verify the intra-and inter-operator reliability of Champdas Master System used by well-trained operators to collect match statistics in live association football match.
Validity of Performance Variables Used by Champdas Master System
At the first stage of the study, a panel of experts constituted by 20 coaches or assistant coaches from China, Spain, Portugal, Germany, and Ireland voluntarily participated and completed the questionnaire aimed at validating the performance variables used in the Champdas Master System. The inclusion criteria for the coaches were as follows: (i) having coached professional teams at a level equivalent to the first or second divisions of the Asian Football Confederation (AFC) or the Union of European Football Associations (UEFA), or having coached at a semi-professional level equivalent to an AFC or UEFA third division; and (ii) holding a coaching license equivalent to or higher than the AFC-B or UEFA-A level. The participating coaches had an average coaching experience of 13.3 ± 7.1 years; among the twenty of them, five held a UEFA-Pro license, five a UEFA-A license, nine an AFC-A license and one an AFC-B license. Before filling in the questionnaire and signing the voluntary informed consent, each coach received an explanation of the study purpose and of the anonymous academic use of their answers.
Performance Variables and Operational Definitions
The questionnaire was based on the performance variables used by the Champdas Master System, divided into three domains: (i) attacking-related performance; (ii) passing-related performance; and (iii) defending-and-goalkeeper-related performance. The criteria for selecting these variables were mainly based on two categories of existing literature, namely, studies that examined the validity of other match analysis systems and the variables analyzed (Valter et al., 2006; Bradley et al., 2007; Liu et al., 2013; Castellano et al., 2014; Beato et al., 2018), and studies that focused on the tactical patterns and passing behaviors of football players (Rein et al., 2017; Goes et al., 2019; Pollard, 2019). Among all the variables, the inclusion of two should be highlighted. First, considering the width of the pitch, the different pitch paths (left, center, and right) and their actual usefulness in interpreting offensive behaviors (Sgrò et al., 2017), we included the "attacking shift" as a performance variable, revealing a quick ball transition from side to side. This corroborates the findings of Rein et al. (2017) in that more successful teams can increase space control in the attacking zone through passing, creating defensive disadvantages for the opposing teams. Second, passing directions were established by calculating the angles from current passes to the next events in relation to the sideline and the attacking direction (see Figure 1; Serpiello et al., 2017; Goes et al., 2019).
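As an illustration of this angle-based classification, the sketch below assigns a direction label to a single pass from its start and end coordinates; the thresholds follow the operational definitions listed below, while the function and coordinate convention are hypothetical, not Champdas internals.

```python
import math

# Hypothetical sketch (not Champdas internals): label a pass from the
# angle of its direction relative to the attacking direction (x axis),
# using thresholds consistent with the operational definitions below
# (forward < 15 deg from the sideline parallel, lateral within 15 deg of
# the goal-line parallel, backward within 75 deg of the sideline parallel
# in the backward direction, diagonal in between).
def pass_direction(start_xy, end_xy):
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]
    angle = math.degrees(math.atan2(abs(dy), dx))  # 0 deg = straight forward
    if angle < 15:
        return "forward"
    if angle <= 75:
        return "diagonal"
    if angle <= 105:
        return "lateral"
    return "backward"

print(pass_direction((50, 30), (70, 33)))   # forward  (about 8.5 deg)
print(pass_direction((50, 30), (60, 45)))   # diagonal (about 56 deg)
print(pass_direction((50, 30), (30, 35)))   # backward (about 166 deg)
```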
Attacking shift
The situation takes place in midfield or attacking third where the attacking players take the initiative to transfer the ball from one sideway to the other (done with no more than three passes), for creating better attacking space.
Enter into attacking third
(Pitch is divided into three zones, including attacking third, middle third and defending third. Enter into offensive zone means entering into attacking third). It includes the following conditions: (1) After players enter into the attacking third, a transition of ball possession is realized (including possession gained by defensive actions or opponent's turnover); (2) A dead ball occurs.
Dribble
A dribble is an attempt by a player to beat an opponent in possession of the ball. A successful dribble means the player beats the defender while retaining possession; unsuccessful ones are where the dribbler is tackled.
Shot
An attempt to score a goal, made with any (legal) part of the body, either on or off target. The outcomes of shot could be: goal, shot on target, shot off target, blocked shot, post.
Shot on target
The definition of a "shot on target" or a "shot on goal" (SOG) is goal or any shot attempt to goal, which required intervention to stop it going in or resulted in a goal if left unblocked.
Possession gained
The total number of possession regained by active defending (tackle, interception), or by passive recovery (ball cleared by the opponents).
Aerial Duel
Aerial Duel can be also called as heading duel. Two players competing for a ball in the air, for it to be an aerial both players must jump and challenge each other in the air and have both feet off the ground. The player who wins the duel gets the Aerial Won, and the player who doesn't get an Aerial Lost.
Passing-Related Performance
Pass
Any pass attempted from one player to another. Excluding free kicks, corners, throw-ins, and goal kicks.
Successful pass
Any pass successfully reached from one player to another. Excluding free kicks, corners, throw-ins, and goal kicks.
Forward pass
Forward pass (the angle between the pass direction and the parallel of the sideline is <15°).
Through ball
Ball passed through the last line of defense.
Lateral pass
The lateral pass to the left or right (the angle between the pass direction and the parallel of the goal line is ≤15°).
Diagonal pass
The pass with the angle between the pass direction and the parallel of the sideline between 15° and 75°.
Backward pass
The pass with the angle between the backward pass route and the sideline ≤75°.
Long pass
The distance of pass >25 m.
Short pass
The distance of pass ≤25 m.
Assist
The final pass or cross leading to goal-scoring.
Consecutive pass
The total number of passes that a team realizes without losing ball possession or causing a dead ball.
Key pass
The final pass or cross leading to the recipient of the ball that attempts to make a goal, but fails.
Cross
Any pass that delivers the ball into the penalty area by the attacking team, from lateral areas of the attacking third (not played inside of the penalty area).
Defending-and-Goalkeeper-Related Performance
Tackle
When the opposing player is in possession of the ball, but has no intention to pass, the defending player acts dispossessing the ball. A tackle won is given during following conditions: when a player makes a tackle and possession is retained by either himself or one of his teammates; when the tackle results in the ball leaving the field of play.
Interception
When a player intercepts any pass event between opposition players and prevents the ball reaching its target. Note: Defending player should be close to the receiver.
Clearance
Player under pressure gets the ball clear of the danger zone or/and out of play. If the ball is intentionally played into another teammate, it is not considered as a clearance, but a pass.
Blocked pass
Similar to interception but the opposing player is already very close to ball and successfully block the pass from passing player. It usually happens when player consciously or unconsciously blocks the pass immediately when the ball is attempted from one player to another.
Blocked shot
A defensive block, blocking a shot going on target. This must be awarded to the player who blocks the shot.
Save
The goalkeeper prevents the ball from entering the goal with any part of his body.
Punch
The goalkeeper punches/hits any ball played into the box.
Deflected save
When goalkeeper saves a shot, but does not catch the ball.
Keeper sweeper
When the goalkeeper runs out from the goal line to either intercept a pass or close down an attacking player.
Questionnaire Design and Quantitative Evaluation
In order to make sure that variables were precisely defined and validly represented certain aspect match behavior (O'Donoghue, 2007), the questionnaire was designed to quantitatively evaluate: (i) the level of correct definitions of the performance variables, and (ii) the level of variable pertinence to match behaviors. This evaluation was comprised of a scale from 1 to 10, and an example of the questionnaire is presented in Table 1. There was no time limit to complete the questionnaire and the average time the coaches used to fill out the questionnaire was 20 min.
Afterward, their answers were collected and analyzed to calculate the content validity for each variable. To achieve this, the Aiken's V coefficient of each item and its respective 95% confidence interval were used (Aiken, 1980;Penfield et al., 2004).
Table 1 | Example of a questionnaire item (Dribble).
(1) Definition: A dribble is an attempt by a player to beat an opponent in possession of the ball. A successful dribble means the player beats the defender while retaining possession; unsuccessful ones are where the dribbler is tackled. [Poorly defined 1-2-3-4-5-6-7-8-9-10 Correctly defined]
(2) Pertinence: Does this variable seem pertinent to the match performance? [None 1-2-3-4-5-6-7-8-9-10 Maximum]

The magnitude of this coefficient ranges from 0 to 1, with 1 being the greatest possible magnitude, indicating perfect agreement among the judges regarding the highest validity score "10" of the contents evaluated on the scale. An item was determined to be valid if its Aiken's V coefficient exceeded the exact critical value calculated with the formula of Aiken (1985), which takes into account the number of judges and items in the questionnaire, where z is the level of significance, m the number of items that the experts should evaluate, n the number of expert judges participating in the study, and c the maximum value with which an item can be evaluated. The exact critical value was calculated to be 0.52 via this formula at a statistical significance level of p < 0.05.
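For clarity, the sketch below computes Aiken's V for a single item from a set of judge ratings and compares it with the critical value of 0.52 reported above; the ratings themselves are made up for illustration.

```python
# Sketch of the content-validity computation described above: Aiken's V
# for one item, V = sum(s_i) / (n * (c - 1)), with s_i = rating minus the
# lowest possible rating, n judges and c rating categories.
def aikens_v(ratings, lowest=1, highest=10):
    n = len(ratings)
    c = highest - lowest + 1
    s = sum(r - lowest for r in ratings)
    return s / (n * (c - 1))

ratings = [9, 8, 10, 7, 9, 8, 9, 10, 8, 9, 7, 9, 8, 10, 9, 8, 9, 7, 8, 9]  # 20 judges (made up)
v = aikens_v(ratings)
print(round(v, 2), "valid" if v > 0.52 else "not valid")  # compared with the critical value 0.52
```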
Intra-and Inter-Operator Reliability Test for the Champdas Master System
Champdas Master System is a computerized match analysis system developed by Champion Technology Co., Ltd., to generate live match statistics for professional association football matches. Any performance analyst using the System has to firstly go through a rigorous learning process to get comprehensively familiarized with the definitions of all match actions or events, live coding mode, hot-keys, on-screen manual positioning with mouse, and movement characteristics (see the illustration in Figure 1). Later, they are required to practice the learned knowledge and skills within various trial matches so as to be capable of collecting formal live match statistics. Main data capture mode combines hot-keys of keyboard and on-screen positioning to represent events and labels. The onscreen positioning functions by using mouse markings on a scaled-down version of football pitch, which is employed for tracking players. The movements of mouse and codes correspond to the actual actions performed by players in the actual match.
Event buttons/labels represent match events that are to be recorded over the course of the match. Some events carry multiple levels of information, so short-cut keys or key combinations recording different aspects of the same event are used and synchronized by the system. Two categories of data are automatically entered into the system once match events are manually coded. The first category includes events such as corners, free kicks and throw-ins, which can be automatically identified from the marked locations on the simulated pitch; the other category is composed of actions such as long balls, successful passes, consecutive passes and attacking shifts, which can be automatically and logically generated from the relationship between players and pitch zones. The corresponding time of the event is also recorded automatically once an event is notated. The marked event locations are later integrated to generate additional tactical performance information.
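By way of illustration only (the system's internal rules are not published here), the sketch below shows how attributes such as the pitch third or a long-pass flag might be derived automatically from a manually coded event; the Event record, the pitch dimensions and the 32 m long-pass threshold are assumptions for the example, not the system's actual definitions.

```python
# Hypothetical sketch of deriving location-based labels from a coded event.
from dataclasses import dataclass
from math import hypot

PITCH_LENGTH = 105.0   # assumed pitch length in metres, attacking direction along x
LONG_PASS_MIN = 32.0   # assumed threshold for a "long pass", in metres

@dataclass
class Event:
    player_id: int
    action: str          # e.g. "pass"
    clock_s: float       # match clock in seconds
    x0: float; y0: float # start location marked on the on-screen pitch
    x1: float; y1: float # end location (for passes)

def derived_labels(e: Event) -> dict:
    """Attributes that could be inferred automatically from the marked locations."""
    third = ("defensive" if e.x0 < PITCH_LENGTH / 3
             else "middle" if e.x0 < 2 * PITCH_LENGTH / 3
             else "attacking")
    length = hypot(e.x1 - e.x0, e.y1 - e.y0)
    return {"pitch_third": third, "pass_length_m": round(length, 1),
            "long_pass": length >= LONG_PASS_MIN}

print(derived_labels(Event(7, "pass", 123.4, 20.0, 30.0, 60.0, 40.0)))
```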
Live Data Collection and Sampled Match
To test the system's reliability, four well-trained operators (experience = 1.5, 1.5, 2, and 2 years) from Champion Technology Co., Ltd., collected twice, with an interval of 2 weeks, the match data of the 19th-round Spanish La Liga Santander fixture between Real Madrid and Villarreal contested on January 13, 2018 (live broadcast from conventional TV coverage). The number of matches used and the coding procedure followed the routine of previous studies that validated similar systems (Bradley et al., 2007; Choi et al., 2007; Liu et al., 2013; Beato et al., 2018).
The operators were separated into two groups and observed the match independently. To be capable of formally operating the system, new operators must take part in a training process that consists of five parts: (i) definition learning, (ii) action coding, (iii) practice on a test server, (iv) played-match coding, and (v) live match coding (Champion Technology Co., Ltd., 2018). During this months-long training process, new operators were required to become familiar with all match actions, events and corresponding codes, and to gradually develop accuracy and proficiency in data collection. During live match coding, only one principal operator from each group was in charge of coding all match events, while the other was responsible for checking the completeness of the whole dataset and amending any major inconsistency in the statistics wherever needed; the final report is therefore usually regarded as the work of a single operator. Accordingly, in the current study, Operator 1 and Operator 2 were used to represent each coding group. A total of 27 players were observed, including 22 starters, 5 substitutes, and 2 goalkeepers. The data collection was officially authorized and supported by Champion Technology Co., Ltd., and the institutional ethics committee of the Technical University of Madrid approved the study.
After the data collection, the raw data were exported to Microsoft Excel with their corresponding timeline. As there were large differences in players' action counts due to different on-field time, the agreement of match actions and events coded by independent operators was analyzed using the same three groups of actions as in the validation stage: (i) attacking-related actions: dribble won, dribble lost, corner, attacking shift, possession gained, free kick, goal, header, shot, shot on target, shot off target, shot saved, throw-in, offside, and entry into the attacking third; (ii) passing-related actions: pass, successful pass, forward pass, through ball, lateral pass, diagonal pass, backward pass, long pass, short pass, assist, consecutive pass, key pass, and cross; and (iii) defending- and goalkeeper-related actions: blocked pass, blocked shot, clearance, interception, tackle won, tackle lost, aerial won, aerial lost, yellow card, keeper sweeper, save, punch, and deflected save.
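As an illustrative sketch only (the study's own analysis scripts are not reproduced here), inter-operator agreement of the kind reported below could be computed as Cohen's kappa on time-aligned event labels and as a standardized typical error on per-player action counts; the alignment step, the example labels and the counts are hypothetical assumptions, and the kappa function is scikit-learn's cohen_kappa_score.

```python
# Hypothetical sketch of the agreement statistics: Cohen's kappa on aligned
# event labels and a standardized typical error on per-player action counts.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Assumed: event labels from two operators already aligned by timestamp.
op1 = ["pass", "pass", "tackle won", "shot", "clearance", "pass", "cross"]
op2 = ["pass", "pass", "tackle lost", "shot", "clearance", "pass", "cross"]
kappa = cohen_kappa_score(op1, op2)

# Assumed: per-player counts of one action group from two data collections.
c1 = np.array([12, 45, 33, 8, 27, 51, 19], dtype=float)
c2 = np.array([12, 44, 34, 8, 26, 51, 20], dtype=float)
te = (c1 - c2).std(ddof=1) / np.sqrt(2)                      # typical error of measurement
standardized_te = te / np.concatenate([c1, c2]).std(ddof=1)  # scaled by between-player SD

print(f"kappa = {kappa:.2f}, standardized typical error = {standardized_te:.2f}")
```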
RESULTS
Based on the evaluation of twenty professional coaches over 31 variables, Aiken's V averaged 0.84 ± 0.03 for the degree of variable pertinence to match performance and 0.85 ± 0.03 for the correct definition of the variables, showing high values for the content validity of all variables (see Table 2).

Table 3 showed that there were in total 5,430 events agreed by the two independent operators within two collections for Real Madrid, and 4,065 for Villarreal. Comparing intra-operator data collections between the first and second collection, the average time difference of event coding was 0.91 ± 0.94 s for Operator 1 and 0.81 ± 0.88 s for Operator 2, while the average time difference between Operator 1 and Operator 2 for inter-operator data collections was 0.89 ± 0.88 s. Details can be seen in Figures 2, 3. The Kappa statistics for the events of the two teams were 0.97 and 0.89, showing a very good agreement between independent operators. Table 4 showed that there were in total 2,619 events agreed by Operator 1 within two collections for Real Madrid, and 1,948 for Villarreal; the Kappa values for the events of the two teams were 0.91 and 0.93. Table 4 also showed that there were in total 2,781 events agreed by Operator 2 within two collections for Real Madrid, and 1,953 for Villarreal; the Kappa values for the events of the two teams were 0.91 and 0.87. These results demonstrated a very good intra-operator agreement (see Supplementary Tables 1-4).

Table 5 shows that the intra-class correlation coefficients (ICC) ranged from 0.98 to 1.00 and the standardized typical errors (TE) varied from 0.01 to 0.15 for the different groups of match actions coded by the same operator within two data collections, showing very good intra-operator reliability. The ICC ranged from 0.93 to 1.00 and the TE varied from 0.01 to 0.29 for the different match actions coded by different operators within two data collections, showing a high level of inter-operator reliability. Furthermore, an empirical comparison was made between the match statistics provided by OPTA Sports and by Operators 1 and 2 from the Champdas Master System, concerning the same match events that both systems comprise (see Table 6). Generally, both operators demonstrated an acceptable agreement with OPTA in all compared variables, except for a slight discrepancy in short passes.
DISCUSSION
This study has examined the validity and the inter- and intra-operator reliability of the Champdas Master System operated by different well-trained operators, who were unaware of the study purpose. The validation process of this system is important for scientific acknowledgment and credibility. According to the professional football coaches' evaluation, the match variables included in the system had high levels of content validity. Operators separately coded more than 4,000 events in each data collection, which was higher than the values from previous studies applying other systems such as Prozone, OPTA and Digital.Stadium (Bradley et al., 2007; Liu et al., 2013; Beato et al., 2018). Moreover, the results reported in this study showed that the Champdas Master System had high levels of absolute and relative reliability. This reveals that the system is capable of measuring football match events reliably and of providing more technical-tactical performance details than its peer systems.

A practical measure with high validity must also have high reliability, whereas a measure with high reliability may still have low validity; only valid and reliable performance indicators can be reliably used in sports performance profiling (O'Donoghue, 2007; McGarry et al., 2013). Therefore, the study initially examined the validity of the performance indicators by evaluating experts' opinions, in line with the previous literature (Trniniae et al., 2000; Hraste et al., 2008; Larkin et al., 2016). Based on the high values of Aiken's V calculated for the twenty professional coaches' responses to the pertinence (0.84 ± 0.03) and definition (0.85 ± 0.03) of the match variables, it was revealed that the variables comprised in the system have appropriate operational definitions and are able to represent match events and player actions during data gathering and analysis.

As previously argued, a valid operational definition is not sufficient to guarantee a reliable observation, because human error frequently affects the repeatability of the data (Williams and O'Donoghue, 2006; O'Donoghue, 2007; Beato et al., 2018). Therefore, the current study verified that, after rigorous training and a large quantity of practice using the Champdas Master System, operators could achieve high intra- and inter-operator reliability when coding live football match events, and that the data provided were reliable, as supported by the high Kappa values, high intra-class correlation coefficients and low standardized typical errors. The findings were similar to those of the previously tested Prozone MatchViewer System (Bradley et al., 2007), OPTA Client System (Liu et al., 2013), and Data.Stadium System (Beato et al., 2018). This suggests that the semi-automatic operation errors in data collection with the system were extremely limited. Besides, it should be noted that former research focused on the measurement of typical technical-tactical variables that were mainly notated via shortcut keys (Liu et al., 2013). Nonetheless, compared with its counterparts, the current system is operated via both shortcut keys and mouse clicking on a simulated pitch. On that account, the system not only produces more data related to attacking and passing behaviors, but also effectively provides tactical information by considering the pitch zones where match performance took place. This allows more comprehensive technical-tactical match statistics for performance analysis and broadcasting purposes.
However, there were several limitations that should be addressed. First of all, more matches may be needed to test the generalization of the system and of operator reliability. Additionally, it is acknowledged that occasional discrepancies existed in some match actions related to passing directions and short/long passes. This occurred both when comparing OPTA Sport and the Champdas Master System and when assessing intra- and inter-operator reliability. It may be explained from two perspectives. Primarily, the inconsistency between OPTA Sport and the Champdas Master System in short passes might be caused by different definitions of pass length; this is supported by the similar total number of passes both systems recorded in the current match. Moreover, after a round of retrospection with the participating operators, we were informed that the disagreements in determining the lengths and directions of passes mainly originated from plotting errors and entry errors. These errors were due to manual marking discrepancies when operating on the miniaturized on-screen pitch. For example, for the passing-direction related variables, the system would automatically recognize a pass as a lateral pass even if a player made a forward pass (with an angle of < 15° between the pass direction and the parallel of the sideline) to a teammate located at his left front. Therefore, if operators could not observe or notate pass directions on the screen exactly as they actually occurred on the field, disagreements in passing categories would occur. In fact, this issue was also reported for the ProZone MatchViewer system (Bradley et al., 2007; Castellano et al., 2014) and the Trakperformance system (Burgess et al., 2006; Edgecomb and Norton, 2006; Carling et al., 2008), which used a similar approach. Furthermore, the operators' observation would be affected while gathering data from live TV coverage, as Tenga (2010) found that camera angles, image sizes and the footage itself could be blurry when an accurate event location is expected. Consequently, more precision problems appear especially when operators are notating match events related to positions, distances and angles.
In short, the largest technical problem was that operators had difficulty in consistently plotting the X/Y coordinates of events on the miniaturized pitch. In light of these issues, it is suggested that a simplification of passing directions be considered. Instead of including the diagonal pass (Figure 1), four categories could be established at 90° angle intervals (Goes et al., 2019): forward, left and right sideways, and backward. Meanwhile, automatic player-tracking instruments should be further developed and integrated into the data collection process so that operators could avoid the subjective determination of the directions, lengths and outcomes of passes (Beato and Jamil, 2017). Nonetheless, the current results show that the match statistics can be deemed acceptable so long as operators undergo adequate training and practice in maneuvering the system and identifying specific match events.
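Purely as an illustration of the simplified scheme suggested above (not part of the system as published), a pass could be assigned to one of the four 90° sectors from its marked start and end coordinates; the orientation convention below (attacking direction along the positive x-axis, positive y toward the passer's left) is an assumption for the example.

```python
# Hypothetical sketch of the four-sector pass-direction classification.
from math import atan2, degrees

def pass_direction(x0, y0, x1, y1):
    """Classify a pass into forward / left sideways / right sideways / backward,
    assuming the attacking direction is the positive x-axis and positive y is
    toward the passer's left."""
    angle = degrees(atan2(y1 - y0, x1 - x0))   # -180..180 degrees, 0 = straight forward
    if -45 <= angle < 45:
        return "forward"
    if 45 <= angle < 135:
        return "left sideways"
    if -135 <= angle < -45:
        return "right sideways"
    return "backward"

print(pass_direction(30.0, 34.0, 55.0, 40.0))   # slightly diagonal forward pass -> "forward"
```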
The quality of the data provided plays a prominent role in performance analysis, sport coaching, media reporting, and scientific research (O'Donoghue, 2009, 2014; O'Donoghue et al., 2017). The study reveals a high level of validity and reliability when using the Champdas Master System to measure live football match statistics. From a theoretical and practical perspective, coaches, sport scientists, and the media could benefit from the application of the system to gain reliable technical-tactical performance information. Additionally, future studies could evaluate the generalization of the system with a larger sample in similar or distinct leagues, as well as under distinct lighting conditions.
CONCLUSION
The analysis of football expert panel opinions evidenced the validity of the tactical and technical match performance variables of the Champdas Master System. Moreover, high Kappa values, high intra-class correlation coefficients and low standardized typical errors demonstrated a high level of intra- and inter-operator reliability when using the system to collect sampled match events. Although a slight discrepancy was shown in the identification of players' passing directions, our results suggest that the Champdas Master System can be used validly and reliably by well-trained operators to collect live football match statistics. The system and the statistics generated can be regarded as trustworthy for coaching, academic research and media reporting.
ETHICS STATEMENT
We state clearly that written informed consent was obtained from the participants of all phases of the study, and the study was approved by the Ethics Committee of the Technical University of Madrid.
AUTHOR CONTRIBUTIONS
M-ÁG, BG, and YC designed the experiments. YC, YG, and QY performed the statistical analysis. BG wrote and revised the manuscript. M-ÁG supervised the design and reviewed the manuscript. All authors made a substantial and direct contribution to the manuscript and approved its final version.
FUNDING
This work was supported in part by National Key R&D Program of China (2018YFC2000600). BG was funded by the China Scholarship Council (CSC) from the Ministry of Education of People's Republic of China with grant number (2017)3109. YC was supported by the China Postdoctoral Science Foundation and the Fundamental Research Funds for the Central Universities (2019QD033). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Living with Multiple Sclerosis: A Phenomenological Study of Worries, Concern and Psychological Problems in Iranian Patients with MS
Multiple sclerosis (MS), as a progressive and degenerative illness, has an impact on different aspects of individuals' lives and may lead to difficulties, concerns, and worries in patients. The aim of the present study was to investigate concerns, worries and problems in patients with MS. We used a descriptive phenomenological qualitative approach. Participants were volunteers purposively selected based on their availability. We carried out in-depth interviews with 15 MS patients and analyzed the detailed information obtained from these interviews using Colaizzi's method. We extracted six essential themes and thirty-four sub-themes associated with MS from the content of the interviews. The main themes were labelled "Confronting existential concerns," "Crisis of facing up with the illness," "Suffering from the illness," "Relationship," "Confrontation with spirituality and religion," and "Searching for tranquility." The results of the present study also reiterated the following: patients with MS seem to lose the meaning of life, and this, together with problems in dealing with existential concerns, may lead to the "disintegration of self," hence resulting in considerable psychological disturbance and distress. It is concluded that the illness evokes psychological injuries such as existential anxiety, relationship disturbance and hopelessness, and these psychological injuries can lead to relapse of MS.
in the early stages of the illness seems to remain stable throughout the patient's lifetime (Chruzander et al., 2013). Therefore, it is considered a poor prognostic factor (Lanzillo et al., 2016). Patients with cognitive dysfunctions might lose their jobs, might not be engaged in social and vocational activities and might face problems in their daily tasks (Schwartz et al., 1999). In addition, the hypothesis that depression in MS is in part due to neurological consequences of the illness process is supported by the work of Holden and Isaac (2011). Accordingly, Siegert and Abernethy (2005), Holden and Isaac (2011), Bulloch et al. (2015) and many other studies hold that depression, as a symptom of the illness, has a strong impact on MS patients' quality of life.
Therefore, patients with MS are faced with a set of psychological issues because of the nature of the illness. The literature is abundant with evidence of this kind. The range of psychological problems include lack of self-efficacy and motivation (Backus, 2016); limitation in activities of daily living (Khan & Amatya, 2017); decreased sense of self control and control over daily tasks, as well as lack of self-esteem; problems in family and marital relationships and feeling of loneliness (Holden & Isaac, 2011;Olsson et al., 2008); mood disturbances (Lanzillo et al., 2016); behavioral, environmental and employment problems (Concetta Incerti et al., 2015).
A mutual relationship can be found between mental health and physical health such that psychological well-being also has a critical impact on illness (Pompili et al., 2012). On the other hand, there is a physiological link between the nervous system and the immune system. For instance, as believed by Van Der Hiele, Spliethoff-Kamminga, Ruimschotel, Middelkoop, and Visser (2012), moderate chronic stressors in daily life may increase a person's vulnerability to MS inflammations. In addition, "stress" is generally considered as the main risk factor for mental health and plays a pivotal role in development, progression, relapse, and stability of MS (Nasiry Zarrin Ghabaee, Bagheri-Nesami, & Malekzadeh Shafaroudi, 2016).
Because MS mainly begins during early to middle adulthood (Pakniya et al., 2015), is most prevalent in middle life (Multiple Sclerosis International Federation, 2013), and leads to serious physical and psychological disabilities, it is essential to make use of psychological interventions that reduce the level of individual stress and increase the sense of self-efficacy and self-esteem. Interventions of this kind may also help patients control their symptoms and reduce the probability of relapse.
Despite the fact that a wide range of studies have examined physical and psychological disorders as well as psychological problems of MS patients, few studies have considered specific issues and concerns of MS patients based on the patients' real life experiences; this is the main aim of the present research. Since we aim to explore issues faced by MS patients, the methodology is qualitative and phenomenological (la Cour & Hvidt, 2010). Hence the present study aims to offer an exploratory insight into MS patients' thoughts, experiences, and attitudes regarding weaknesses, disabilities, and other psychological experiences they may have in the process of the lived experience of their illness. In this study, we endeavored to explore patients' experience of MS within the context of phenomenology, with special attention to how these participants experienced and described problems, concerns, and worries.
Method
We classify this study as a descriptive phenomenological qualitative approach, which focuses on the in-depth study of specific instances of a phenomenon (Gall, Borg, & Gall, 1996). We use Colaizzi's (1978) framework in the present study.
Concerning the suitable qualitative research approach for exploring the subject matter of the current study, it should be noted that the focus of the research topic is on understanding a phenomenon from the point of view of those who have experienced it (Smith & Osborne, 2015). "Lived experiences" is one of the constructs in phenomenological research (Vagle, 2018). The lived experiences described by participants are used to define the universal structures (i.e., essences) of the phenomenon (De Chesnay, 2014). The important thing is to describe the shared notion linked to a particular phenomenon by all participants in the study. Put plainly, this method involves dealing with a phenomenon - in the case of the current study, having the MS illness. Next, we collect data from the people who have experienced this phenomenon, and a composite description of the common essence of their experiences of the phenomenon is provided. This description encompasses both "what" they have experienced and "how" they have experienced it (Creswell, 2007).
We collected the data through in-depth interviews in order to conduct a more detailed and accurate examination of the experiences and perceptions of the participants. The authors transcribed the data verbatim and analyzed them using Colaizzi's method (Colaizzi, 1978; Vagle, 2018). Eligible participants included all MS patients having brain plaques, with or without spinal plaques (as determined by their physician). We did not apply any limitation for the inclusion of participants regarding their age, age of onset, illness duration, sex, or EDSS score. Our exclusion criteria included the presence of spinal cord plaques alone, a history of psychotic disorders or hospitalization in a mental hospital, and other neurological disorders accompanying the MS illness.
Procedures in Participant Selection and Data Collection
The study received ethical approval from the Department of Psychology of Shahid Beheshti University in Tehran. Furthermore, we informally obtained the participants' informed consent regarding the aims of the study before data collection. Also, to enhance participant safety, we assigned a specific code to each participant to maintain anonymity.
We recruited the participants in two ways: announcements in a number of neurologists' offices and invitation letters in social networks for MS patients. We then followed up with a phone call to assess patients' interest in taking part in the study.
Within a two-month schedule, 42 patients agreed to participate. In the initial screening interview, 17 patients were excluded from the sample for the following reasons: seven patients for having only spinal plaques, four patients for having a co-morbid psychiatric disorder, and six patients due to inability to attend interviews. A final sample of 25 patients with MS was selected for participation. Since a number of participants did not live in the Tehran area, the time of their interviews was set according to when they could come to Tehran, whether for a medical appointment or for other reasons. Data saturation, the criterion for sample size estimation (Saunders et al., 2018; Marshall, Cardon, Poddar, & Fontenot, 2013), had been reached after interviewing 12 participants; we continued the interviews with three additional participants to make sure that the saturation point had been reached. The level of data saturation was determined by the interviewer as well as by an independent researcher in a process carried out in parallel with data collection. Finally, based on purposive sampling and after data saturation, we set the sample size of this study at 15.
Accordingly, we analyzed the content of interviews with 15 MS patients in the present study.
Two to three weekly interviews of 30-45 minutes were held with each participant, depending on the severity and variety of their problems, physiological ability, and mental readiness. The sessions included psychiatric, personal history, and in-depth interviews. Afterwards, we transcribed, reread, and analyzed the recorded interviews.
Instruments
In-depth interview. We collected the data through an in-depth, semi-structured, face-to-face interview. "The in-depth interview is a qualitative research technique that involves conducting intensive individual interviews with a small number of respondents to explore their perspectives on a particular idea, program, or situation" (Boyce & Neale, 2006, p. 3). We followed the three-interview phenomenological series (Bevan, 2014; Seidman, 2006): to understand the participants' experience and to clarify the meaning of their lived experience of the phenomenon, we asked them to reflect on the meaning of their experience.
Structured clinical interview.
As psychiatric disorders were set as exclusion criteria in this research, a semi-structured clinical interview to assess and diagnose mental disorders based on the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5), was also carried out for patients suspected of psychiatric disorders.
Personal and family history interview. This brief interview was conducted before the in-depth interview to better understand the context of the participant's family and personal life and to search more deeply into the context in which the disease had developed. This interview was conducted with open-ended questions focusing on four domains: early childhood and family background, teenage years, adulthood, and a self-evaluation overview.
Data Analysis
Colaizzi's method was used for coding and analyzing the data. The Colaizzi method is a seven-step process. In the first phase, the recorded interviews of the participants are listened to repeatedly, and their statements are put down on paper word by word following the end of each interview and the compilation of field notes. The transcripts are studied several times to grasp the participants' feelings and experiences. In the second step, when all descriptions of the topic provided by the interviewees have been studied, the significant information and statements related to the phenomenon under discussion are underlined to draw attention to the important lines. The third phase involves the induction of formulated concepts; that is, after highlighting the important statements in each interview, the researcher tries to derive a concept indicating the scene and basis of the individual's thought.
Having extracted the codes, in the fourth step the researcher studies the developed concepts carefully and classifies them by comparing the concepts. Subsequently, thematic categories are built upon the developed concepts. Next, in the fifth phase, the results are put together to reach a comprehensive description of the phenomenon under study and to form broader categories. During the sixth step, a comprehensive description (as clear and unambiguous as possible) of the phenomenon is provided. The final step is validation, which is done by returning to each participant and asking about the findings. Important lines relating to the phenomenon under study were highlighted in the text. Marginalia are suggested by Creswell (2007) at this stage to preserve immediate ideas and perceptions. Therefore, the primary concepts were preserved through marginalia.
Then significant statements and phrases were made out and noted down on a separate sheet. Formulated meaning was derived from the aggregation of these significant statements. Specific codes were assigned to each one of formulated meanings (This is illustrated in Table 1) and related codes were classified into clusters that were put into corresponding themes and, finally, main themes and the sub-themes were determined.
Next, more themes were extracted from the formulated meanings, which were more qualitative and analytical. To test the credibility of these analyses, data was sent to two Ph.D. students familiar with qualitative analysis methods to review and check the codes, themes, and sub-themes. Ultimately, the validity of the obtained themes was examined and approved by all participants.
Results
Derived from the qualitative analysis of the interviews, our results indicate that the concerns, worries, and psychological issues of patients with MS can be divided into six main concepts and 34 categories. The main concepts follow; the associated categories, with excerpts, are shown under each concept.
Existential concerns. Existential concerns refer to the issues which may arise at the time of confrontation with the givens of existence. Existential givens are certain and ultimate anxieties constituting an inevitable part of human existence in the world; the part that people are often trying to escape from. However, sometimes, as is the case with MS patients, avoiding or escaping from these "givens" may be difficult or even impossible. Hence, in this study, seven separate sub-themes were identified, most of which were concerned with confrontation with existential givens.

Confrontation with ontological whys and questions. From the beginning of the illness, all participants reported having to face up to questions such as "Why me?," "Why when I am still so young?," "Why was I ever born?," "Why did I get sick?," and "What will happen to me at the end?," as well as other existential questions. Some of these questions were directed towards God or a deity as being responsible for their illness. Participants repeatedly asked God questions such as: "Why did you create me like you did?" and "Why did you do this to me?" The various strategies people used while seeking an answer to these sorts of questions can be categorized as follows: avoiding answering the questions, pondering on the questions, surrendering passively, conflict and aggression, self-blame and depression (self-reference), using the sublimation mechanism to answer the questions, purposefulness and positive thinking, and active surrender.
Will and responsibility. Two significant issues are suggested in this sub-theme: the first is the issue of determinism and free will as far as MS is concerned (illness-induced factors, the participant's personality before the illness, life stressors), and the other is the participant's responsibility as far as treatment is concerned. In general, participants fall into three different groups in this regard, with the majority belonging to the first group.
A. The participant views the illness and its treatment as totally deterministic. This group also expressed a low level of responsibility for treatment. For example, following are comments made by participant 1: "…after all, everything is deterministic, the illness…even taking drugs…recovery from the illness…if God wants, I will get well, there is no need for drugs and other stuff…if only one or two percent of life can be decided by your free will, it is ridiculous to plan for your future… whenever you intend to make God laugh, tell Him about your plans for the future…God says, make a plan, I will ruin it for you." B. The participant views getting ill as somehow deterministic but, thinks he / she has some control and they bear responsibility for treatment. As participant 8 described: " You must withstand the illness and do it with all your might . . . I did not want to cry… it did not have anything to do with me but, I controlled it…in this illness, you know better what you are doing to yourself…you are able to take over the reign…it is hard but possible…you must be tough" C. The participant considers both divine determinism and personal free will affecting the illness but, they think the major cause of the illness is associated with personal factors concerning themselves as well as their freewill. This group is also responsible for their improvement. For instance, as stated by participant 6: "I have only myself to blame for both catching the illness and staying disciplined to get well… I have never blamed God…I am so impressionable, Irritable, and blow a gasket quickly…I never give up easily… It is my fault getting sick" Give meaning to the illness and life. Following confrontation with ontological questions, the participant begins to make the illness and life meaningful. This attempt manifests as either optimism or pessimism. The illness can provide the participants' life with positive meanings by making their life purposeful, promoting their personality development, and empowering them to take advantage of their opportunities. Meanwhile the illness can also insert negative meanings into the participants' life by putting them on the track of gradual death, bitter experience, disability and paralysis.
Shift in purposefulness and orientation toward the course of life.
According to the results, the purposefulness of the subjects before becoming ill influences their orientation toward life in the aftermath of being diagnosed. Before the illness, participants could be put into two groups, namely those having a purpose and those being aimless.
Death as a boundary situation. What has been understood about death can be outlined under the following three headings: Contemplations surrounding death and life after death, Suicide and thoughts surrounding it, and Death Anxiety.
Loneliness. The presence of a sense of loneliness is an undisputable issue in the participants which is characterized by pain and sadness.
Love. This is described as having an emotional relationship with another individual (in the case of the present sample, the opposite sex). Here, the goal is to be in a peaceful and secure relationship rather than a romantic one. There are cases to be mentioned about the power of love as a lost key, able to heal the illness and increase the level of tolerance for the pain inflicted by the illness, as expressed vividly by participant 12: "love can save you from despair…we married with love, but our sexual affairs are not too many! Although I am not able to meet her needs, our romantic relationship goes very well… without love, it is impossible for me to put up with the illness." Similar words were mentioned by participants 5 and 3.
Crisis of facing up with the illness. Crisis occurs when several aspects of the participant's life may be transformed by the process of the illness. Additionally, how an individual may be able to face the crisis is of great significance. Crisis consists of three sub themes.
First is confrontation with the illness. This involves how the participant is informed of his illness and what mood and mental reactions he makes in return. Regarding how the participants confront the illness, they are divided into three groups: A. Those who face up and become aware of the illness directly through an unplanned process by their doctor. B. Those who face up and become aware of the illness through an unplanned process by an unrelated person such as a friend or relative. C. Those who face up and become aware of the illness via a gradual process by immediate family members.
Generally, participants' mood reactions to the illness begin with disbelief and are usually followed by astonishment and denial. They continue to deny the illness for a long time and keep regarding the diagnosis as incorrect. Within a maximum of a few days, their mood turns to anger, deep sadness, and strong emotional responses such as weeping, aggression, and even suicidal thoughts and actions. Aggression is a frequent response. Only those who confront the illness through a gradual process are likely to control their anger, and their emotional reactions are limited to deep sadness, temporary social isolation, and lamentation. For example, as quoted by participant 8: "I became so sad and avoided talking to anybody for about two to three days. I preferred to be alone back then. My disabilities especially made me sick... but then, I told myself that I must be strong…"
Rise of dysfunctional thoughts and beliefs in confronting the illness.
A range of thoughts and beliefs are immediately activated after the illness is diagnosed. Generally, several kinds of beliefs were activated in participants and were as follows: Activated beliefs about the cause of illness and how it is related to God; activated beliefs about death and end of life and activated beliefs about MS itself.
Suffering from the illness. The patient views the illness as constant, chronic, irrefutable, and inevitable suffering. Factors causing the patients' suffering are categorized into nine groups.
Disabling, unpredictable, inevitable, and uncontrollable symptoms (fatigue as a symptom casting a shadow on the entire life), most important of which include: having difficulty in walking, standing up and, consequently, tumbling down, as well as experiencing gradual loss of abilities. As stated by participant 1: "little by little, the illness makes you believe that non-existence is better than existence" The illness convinces some participants to give up and just wait for death to come. Furthermore, the sudden onset and unpredictability of symptoms of the illness usually catch the patients off guard, leaving them in constant fear of new disabling signs and symptoms. While some participants are in grave pain due to the condition of illness, and fear the prospect of being plagued by the illness in the future, others feel that they must cope with their disability to shoulder the responsibility they have toward their children and spouse.
Concerns over future. Waiting for an unknown, imminent, and catastrophic future; fear of getting married, and the future of the relationship, and worries about prospects.
Frustration in life. The major factors contributing to participants' frustration in life include obstacles created by the illness so that achieving goals become difficult. These obstacles, when coupled with the early onset of the illness, would multiply the sense of frustration.
Life difficulties and limitations. Depending on the level of progress and the extent of disabilities experienced, the illness will impose limitations on the participants and may result in hardship and discomfort in life. Social limitations, disability that prevents taking part in preferred social activities, difficulty in moving around, and career limitations are among the major concerns of the participants.
Illness as a stigma and people's judgment.
People's judgment is one of the reasons leading to self-imposed isolation from the community.
The illness stigma is what makes us extremely upset. It really hurts us… I accept the fact that I must go along with all these limitations but, it is what people usually cannot swallow. They think MS is a horrible illness. As soon as they hear that I have MS, they think it is a terrible disaster! And what a miserable person I am…this MS illness stigma (i.e., the image of a disabled, a miserable, and a wheelchair user is really offending). (Participant 5)

Annoying clinical representation. The appearance of symptoms of illness is considered another suffering caused by the illness, whether it is a sign of illness, such as walking or moving with difficulty, or a side effect of taking drugs, such as hair loss.

Never-ending restlessness. This is another suffering inflicted in the aftermath of illness. Participants 15 and 13 refer to it as "always feeling bad," "which never lets go."

Side effects of drugs. Various drugs have various side effects. These side effects seem to occur in one way or another in most participants, as reported by participant 4: "I never inject drugs anymore, it is horrific. It gives me 48 hours of raging fever."

Economic problems. Despite the fact that participants in the present study belonged to different social classes, they all expressed worries and unhappiness about the high costs of treatment in one way or another.
Relationship. This refers to the quality and nature of the individual's significant relationships following the diagnosis of MS and the effects it brings about on the individual.
This theme consists of seven sub themes, which are as follows.
Parents and primary family. This includes support received from the family as well as interfamily conflicts. The interfamily conflicts include conflicts between parents, disturbed relationship between parents and other members, and disturbed relationship between other family members.
Marital relationship and sexual affairs. The marital relationship is influenced by the illness at two different levels: affective and sexual. Following the diagnosis of MS, the marital relationship is affected in different ways; it may continue, break down, or even lead to remarriage of the unaffected spouse. The major issue reported in the aftermath of the illness is related to sexual affairs, which may have harmful effects on the emotional relationship between couples. On the other hand, all single participants seem to worry about the issue of marriage. This is due to their sense of being unable to form a family and continue their life naturally, as well as to have an intimate relationship with a person as their spouse. Sexual problems, as reported by some participants, include worry about the ability to have a healthy sexual relationship and to meet such demands from their spouse in their marital relationship.
Children. In this section, children's issues and childbearing arise as two different topics. In regard to those participants having children, the presence of children is seen as a motivation and stimulus for improvement. Meanwhile, fertility, transmitting the illness to children through genes, unpredictability of the process of illness as well as anxiety over the progression of illness during pregnancy are among the most important matters of concern for both single and married participants.
Friends, relatives, and colleagues.
In most cases, the participants tried to hide their illness from others. It may be owing to their intense irritation at people's wrong judgments or other social outcomes of their illness such as losing their job. The subjects usually report that they lose contact with their old friends and try their best to find new friends under new circumstances.
Peers (other patients with MS). Some participants tried to make new contacts with other patients with MS after being diagnosed. Some found it positive and soothing to make new friends with the same problem, while others were reluctant to make such contact or, once contact was made, their relationships deteriorated or broke down after a while. Participant 2 considers having friends with the same problem a factor helping him to improve. Similarly, participant 13 said: We can understand each other better; other people can't understand what we talk about; when I laugh with other people, I think they don't experience the pain I've suffered, but when I laugh with someone who is like me, I think he is suffering from the same pain and is still laughing, then I begin to laugh heartily too. What am I supposed to see? To see what is going to happen to me within the next 30 minutes or 20 years!? What am I going to learn by hanging with a group of people sitting on a wheelchair or walking with the aid of a walker? Do you think I thank God because I'm not like them!? My dark days will also come soon, I've only had the illness for years, I should deal with it to the end of my life, maybe I am forced to use a walker tomorrow, nothing is certain.
Self. Most participants reported loneliness and took a plunge into their inner self as they avoided making any sort of relationship. Indeed, their avoidance was seemingly a sign of significant conflicts they were experiencing.
Confrontation with spirituality and religion. Most participants (14 cases) had undergone varying levels of a new spiritual and religious experience, ranging from positive to negative, and from a basic superficial change to a deep, broad one bringing their entire life under its effects. This theme involved five sub-themes, which are as follows.
Definitions of spirituality and religion. Definitions provided by participants for the concepts of spirituality and religion were very similar. They saw religion as the factor contributing to spirituality and a religious person as being spiritual.
Performing spiritual and religious practices. The level of spiritual and religious activities was different in subjects and depended on their religious beliefs and attitudes, type of relationship with an attitude towards God as well as their religious identity.
The place of God in life.
Results indicate that there are three influential factors in spiritual and religious orientation of individuals. The kind of image participants have of God; the reasons they have for having a relationship with God; and the strategies for relationship with God. This relationship was divided into five main categories, namely the relationship accompanied with a sense of deep fear and conflict (with or without anger), the relationship accompanied with a sense of guilt (with or without anger), the relationship based on hope and seeking support, the relationship based on love and affection, and finally an attempt to pay less attention to and turning away from God. Participant 1 said: "New experience from a new God, a God you can talk with, and has everything under his control! And I'm a puppet in his hands. His very presence, makes you more spiritual." Participant 15 also describes his feelings as: I feel like a bee trapped in a beehive ! The bee thinks he's free to fly round but the truth is that he could make a short turn around himself in the hive , only able to make a wish; if God is not allowed, he could not even breathe in the hive; now I'm feeling like this, I can't do anything, but it doesn't mean that I don't want to, it means that the God doesn't want.
Religious strategy. Given the image the participants have of religion and spirituality, their religious attitude results in the development of unique strategies to deal with the illness. These methods and strategies could be divided into two groups, positive and negative strategies, based on the significance of God in the participant's life. The positive group relied on optimism and hope to strengthen spiritual power, while the negative group relied on emotions such as fear of divine retribution and feelings of guilt.
Change in attitude toward religion. This type of change occurred differently in various people and could be classified into two groups in general: taking a perspective centered on God and monotheism or holding animosity toward religion and spirituality. Most participants could be grouped in the former as they tried to adopt a viewpoint that further developed their spiritual strength.
Searching for tranquility. Serenity is like a precious lost haven for the participants, who seek it after the distressing periods resulting from the confrontation with the illness. The following groupings could be extracted from the content of the interviews.

Pacifism. Achieving tranquility is a goal for some, and a need and desire for others. Achieving tranquility is considered the ultimate goal for some participants as it encompasses the other goals of life and mobilizes everything for its realization. It even makes people ignore other things in their life to focus only on trying to achieve tranquility. For other participants, tranquility is considered a need and desire, albeit one that is hard, and sometimes even impossible, to achieve. Talking about his never-ending restlessness, participant 9 reported the following: I am so confused and exhausted. I have no peace, no quietness. I cannot be pacified by anything. What used to be my favorite yesterday, only makes me more frustrated today. I feel sick. Even when I am in good physical health, I feel sick…My father says your problem is due to the lack of peace and tranquility within yourself.
Solutions to find peace. The patients attempted to achieve tranquility via four main strategies: finding tranquility through spirituality; deriving tranquility from entertainment; using behavioral solutions such as superficial pleasures; and deriving tranquility from believing that they can control their symptoms.
Sources of peace. In order to achieve tranquility, participants asked multiple sources for help, from God and their friends and family members to psychologists and practitioners. Participants reported that they acknowledged their relationship with God and the love of a faithful and devoted spouse as healing factors leading to tranquility.
Discussion
Given the chronic, progressive, and degenerative nature of multiple sclerosis, from the early phases of the illness patients face a set of difficulties, concerns, and worries that burden them with significant pressure and stress. The results of the present study showed that these difficulties, concerns, and worries of the patients could be classified into six main themes. Consistent with the work of Pakniya et al. (2015) on the effectiveness of cognitive-existential therapy on the quality of life of MS patients, and also consistent with Nasiry Zarrin Ghabaee et al. (2016), the results of the present paper confirm that the major mental concerns and distresses of MS patients are existential concerns and conflicts. An illness such as MS may be considered threatening not only to life but to the very meaning of existence itself. According to the analyses presented in this study, the primary concern of MS patients immediately after being diagnosed is confronting existence-related and ontological questions. These sorts of questions directly address the issues of existence and the sources of existence. Olsson et al. (2008) briefly point to the presence of these questions among MS patients. Accordingly, they suggest that having ontological discussions, as well as taking meaningful and purposeful attitudes toward life when trying to answer existential questions, would help patients achieve self-authenticity in the course of the illness, try to resolve (albeit superficially at times) certain existential conflicts, plan for the future, and develop a meaning in life which may lead patients towards self-integrity.
It seems that the all-important question of determinism versus free will becomes a major concern for MS patients when trying to investigate the causes of their illness. In addition, the level of responsibility felt by patients for their illness is determined by the decisions and choices made when trying to respond to this question. Lingering in the crisis of deciding how much responsibility they may have had for the illness (referred to as the crisis of will by Rollo May), patients may suffer when burdened with the grave responsibility of living with the illness, undergoing treatment, and living in pain, while struggling to make sense of their life in the time at their disposal. It is under such circumstances that the patient may show a wide variety of reactions and responses. As Rollo May has stated: "instead of engaging specific mechanisms in body, the illness casts a shadow on the whole self" (2007, p. 39). Indeed, the "self" plunges into a confusion that strips it of the power of will and choice. The crisis of will in this group of patients is the result of a conflict between the existence and non-existence of power in the personal world. Rollo May wrote: "choosing yourself means choosing life, between life and death and, the one who does so will take responsibility of his life" (2009, pp. 195-196). It is only after accepting the responsibility for treatment and for living on with the illness that one can create new meanings in new situations and stay faithful to them.
Our results show that the illness is assigned far more negative meanings than positive ones. However, there are participants who find reasons to continue their life and to attempt to recover, which helps them lead a purposeful life. The findings of Pakniya et al. (2015) also seem to reiterate the fact that the creation of meaning in life by these patients would be fairly effective in the improvement of their psychological welfare, so that such a person, as quoted by Frankl (2000, pp. 120-125), "is capable of not only deciding on how to live but how to die." Adamson (1997) suggests that existential uncertainty is one of the anxiety-provoking factors in patients with chronic illness. Kierkegaard (1813/1855, cited by Adamson, 1997) believes that death and its unpredictability is a place wherein the person is confronted with existential uncertainty. Consistent with this study's findings, Adamson (1997) notes that existential uncertainty, along with the uncertainty and unpredictability of the illness, keeps patients plagued by chronic illness more alert and more aware of existential concerns such as death. Accordingly, they are exposed to higher levels of both overt and covert anxiety. In this regard, suicidal thoughts were observed in almost all participants, though they seldom led to committing suicide. Pompili et al. (2012) also recognized that a chronic illness such as MS will increase the risk of committing suicide. According to the findings of the present study, the meaninglessness and absurdity of life as well as self-disintegration are two key factors that may give rise to suicidal thoughts and to committing suicide. A study reported by Pakniya et al. (2015) also endorses the idea that existential isolation, that is, the fundamental loneliness in this world, is commonplace in suicidal MS patients.
The essence of the participants' loneliness is made up of an array of interpersonal, intrapersonal, and existential loneliness; however, it is the experience of intrapersonal loneliness that is generally credited with the pain inflicted by loneliness. Participants usually fail to connect to those parts of the "self" that are either unknown or unacceptable to them. This is caused by a disintegrated "self" that shows up in the form of troubling thoughts and of fears and worries, whether rational or irrational. Running away from the loneliness they feel towards the community may result in slipping into false love and attachment, and that may be one reason why dysfunctional affective relationships are widely observed among this group. It is also suggested that, in preparing the patient to confront the illness, the presence of social and family networks and relationships that show concern for the patient's illness is very important. Social and family support can improve the patient's ability to stay calm and self-confident when MS is diagnosed. The patient's capacity to cope with the illness and belief in him- or herself may also improve. Consequently, the patient may accept the illness, accept the responsibility for treatment, become more active in life and gain "self" integrity (Bulloch et al., 2015).
A wide variety of factors are involved when considering the process of suffering as a result of illness. These factors, which may be socio-cultural in nature, may affect the level at which suffering is inflicted as well as how it is prolonged (Adamson, 1997). As one of the social factors, we could refer to the strategies employed by medical staff and/or the family regarding how the patient is informed of the diagnosis and prepared to confront the illness. Patients suffer from an immediate and unprepared confrontation with seriously disabling symptoms, which are uncontrollable and for which there is no clear treatment. Moreover, anticipating the inevitability of forthcoming symptoms can be anxiety provoking. Olsson et al. (2008) recognize the unpredictability of later symptoms, dubbed the "unpredictably unknown body," as one of the gravest concerns of all participants.
The anticipation of a catastrophic future along with a sense of inability to control the symptoms, fear of marriage and the future of relationships, and worry over job prospects, taken as a whole, add to the suffering of the illness. As confirmed by the present study, family and professional relationships are significant territories heavily influenced by the illness (Papuc & Stelmasiak, 2012). Additionally, the sense of inability to shoulder responsibilities such as family and children, education, and profession is an important stress factor (De Judicibus & McCabe, 2005; Cook et al., 2013). How people judge is another factor which may contribute to patients' suffering; it can affect marital and other emotional relationships and inflict direct damage on the patients' self-concept. Since, in the aftermath of the illness, the patient tends to hide the illness (possibly because of cultural taboos and issues), the rise of clinical symptoms may cause difficulties when the patient faces negative evaluation and judgment from others. As a result of this fear of negative evaluation, MS patients may experience lower self-esteem and self-imposed avoidance of social situations and interactions, as found in the present study. In the present study, patients described themselves as being frustrated with life and wistfully longing for their dreams to come true. These longings are not caused by distant, impossible dreams but, as described by some participants, by the inability to do simple, basic, day-to-day household tasks like using a napkin and cleaning the home. As mentioned by Olsson et al. (2008), one of the problems of these patients, and women in particular, is that they are dependent on others even for simple everyday tasks, which they long to do by themselves. As stressed by Backus (2016) and Nasiry Zarrin Ghabaee et al. (2016), the ability to do routine activities has a significant effect on participants' self-concept. Indeed, some findings (e.g., Backus, 2016) indicate that an increase in self-concept pushes the subject to do more of these activities even with fatigue and disability. Backus (2016) believes that MS patients will feel healthier and more self-confident if they can manage to do daily activities.
At the same level of disability, the degree to which participants are affected is related to their psychological capacity and structure as well as their "self" integrity before the illness. It is heavily influenced by family status, caretakers, and the quality of their lives. At the same time, any shift, rise, or fall in quality of life influences patients' behavior (Asmahan, Alshubaili, Ohaeri, Awadalla, & Mabrouk, 2008; Giordano et al., 2016). Family disintegration and serious conflicts are widely observed among the families of participants. These results are confirmed by a collection of studies in which the role of family-induced stress factors is highlighted in the development of serious difficulties and illnesses in children (Frankel, Umemura, Jacobvitz, & Hazen, 2015; Stover et al., 2016). As children need to be offered love and security by their parents, any conflict between the parents and damage to their relationship would lead to permanent nervous tensions with which the children have to be confronted throughout their lives. Furthermore, experiencing a strained atmosphere between family members increases the risk of relapse and attacks in MS patients.
Building on the results of the current study, it is revealed that the marital relationship is profoundly affected by MS. This should not be interpreted as suggesting that a healthy relationship will necessarily be forced into dire conflict and disarray; rather, the diagnosis of MS may make the existing characteristics of the relationship more salient. In other words, existing problems may be reiterated. However, in the case of very strong relationships, closeness between partners may grow as they rise to meet the challenge of MS. Popp, Ann, Robinson, Britner, and Blank (2014) and Messmer Uccelli et al. (2013) showed that unhealthy marital relationships before the illness will continue after the illness, but the illness rarely ruins healthy marital relationships. Additionally, the presence of love and affection in the marital relationship can bring meaning to the lives of these patients. Ivtzan, Lomas, Hefferon, and Worth (2015) also point out that, in the case of chronic and disabling illness, embracing 'the dark side of life' (suffering, hardship, and challenges) may result in personal growth and development. In the present study, most participants believed that receiving emotional support from the family has a positive impact on their health improvement and life expectancy. A study by Hughes, Locock, and Ziebland (2013) confirms the results obtained in the present paper: the emotional support patients receive from their partner and children is so important that it can play an effective role in mitigating the symptoms of the illness, as well as boosting self-confidence and cultivating self-acceptance.
A similar argument can be put forward regarding sexual relationships as an effective factor in marital relations. As suggested by Sevène et al. (2009), in many cases involving patients suffering from chronic illnesses, couples report sexual dysfunction and mention it as the main factor contributing to the relationship's break-up. Major causes of sexual problems in MS include psychosocial and cultural challenges such as low self-confidence, poor morale, and difficulties in interpersonal skills and in initiating and maintaining relationships (Prévinaire, Lecourt, Soler, & Denys, 2014). Consistent with Cordeau and Courtois's (2014) study, participants in the present study very commonly reported dysfunction in sexual performance and desire, especially female patients. We believe that the physiological effects of the illness and the side effects of drugs may be the main reason for this. However, as confirmed by other research (e.g., Cordeau & Courtois, 2014; Samios et al., 2015; Sevène et al., 2009), the authors of the present study believe that there may be a strong interactive effect between patients' problems with emotional and sexual relationships and social and cultural factors such as isolation and fear of negative evaluation. In other words, the MS patient's relationships are limited partly because of other people's judgments and the social consequences of the illness. The patient would prefer to stay away from the crowd rather than be challenged with new problems. Isolation from others is in part interpreted as accepting loneliness. However, due to fear of loneliness and the inability to enjoy a rich inner world, the patient experiences anxiety. This interaction needs to be investigated by future researchers to establish the mechanism through which social and cultural factors interact with physiological and psychological factors to create feelings of loneliness and anxiety.
Here, the only distinct relationship identified is the one with God. Most participants reported that they did in fact maintain a relationship with God, whether positive or negative. Though this relationship is sometimes strained and conflicting, participants in the present study seem to agree that such a relationship (i.e., the need to connect to a higher or ultimate source of power) is necessary. Results of the present study, in keeping with those of Nasiry Zarrin Ghabaee et al. (2016), indicate that the nature of spiritual and religious relations, and the relationship with God in particular, is highly effective in improving the quality of life of MS patients. The present study confirms results obtained by Tofiqi, Azizzadeh Forouzi, Tirgari, and Iranmanesh (2014), according to which the issues of spirituality, spiritual experience, and religious attitude are specifically highlighted by the participants in the aftermath of illness. The issues of religion, God and spirituality, and God's role in the occurrence, process, and fate of the illness inevitably face the patients in times of crisis. The results of the present study are also confirmed by Michaelson et al. (2016), Zimmer et al. (2016), and Rosmarin et al. (2013), who found that the place of God in participants' lives and their faith in God are crucial when they confront illness and try to cope with it. As suggested by Papuc and Stelmasiak (2012), though the effect of religion on the quality of life of MS patients has received little attention, those patients who invite spirituality into their lives and choose spiritual coping strategies enjoy a higher quality of life. Meanwhile, according to the findings of Nowaczyk and Cierpiałkowska (2016), having spiritual resources in particular can not only make up for the loss of other sources of support but also help these patients to further exploit their potential. Developing a spiritual relationship is considered a haven for achieving tranquility, security, and inner peace, which are viewed as major concerns of MS patients.
Finally, it could be asserted that all the above-mentioned categories are interconnected and interrelated, a constellation, as it were, affecting and being affected by one another. Meanwhile, the search for meaning and the crises created by existential questions, along with possible changes in the state of the "self," are regarded as particularly significant factors in this constellation. All the mentioned factors affect the self-concept of participants directly. If the influence is positive, self-concept will be strengthened; otherwise, the patient may suffer from a weakened self-concept. Similarly, we found that most participants suffered from a lack of "self" integrity and tended to move towards the negative side of this spectrum (i.e., self-disintegration) rather than the positive side (i.e., self-integration).
Multiple sclerosis is an auto-immune illness in which the body's immune system cannot distinguish the body's own cells from foreign substances. The results of the present study pose an important question: how do the behavioral and psychological signs and symptoms of MS mirror what happens at the cellular level? What is the mechanism through which damage at the cellular level may result in self-disintegration? Accordingly, it is suggested that future studies focus on the psycho-neurological processes of MS.
Finally, based on the results of the present study, it seems that MS patients may benefit a great deal from suitable psychological interventions. It is suggested that such intervention programs concentrate on concepts such as self-consistency and self-integration of patients. It may also be argued that symptom-oriented treatments are only capable of improving some of the problems these patients face in their everyday lives. The present study suggests that "self-focused" intervention approaches could deliver more stable efficacy.
Conclusion
The present study attempted to investigate Iranian MS patients' worries and concerns by analyzing data obtained through in-depth interviews. We conclude that MS patients' authentic worries are rooted in ontological issues related to existence, existential causes, and anxieties. These worries contribute to a set of concerns across different aspects of life. The major concerns of these patients are over job prospects, education, and relationships with significant others, as well as the unpredictability of the illness. The latter is of particular importance since it seems to influence the patient's life as a whole, resulting in low self-concept, self-disintegration, and loss of meaning in life.
Practical Implications
The results of the present study provide valuable information about how MS patients experience their illness and about their main worries and concerns. This is useful not only to patients themselves but also to their families and carers as well as doctors, medical staff, psychologists, nurses, and social workers working with MS patients. The results reiterate the need for better and more detailed insight into MS patients' needs and psychological problems. Psychotherapists in particular could formulate their approach from a more existential perspective in order to help MS patients live a fuller life and overcome some of the problems they face as a result of living with this illness. The implications of our findings are also important in that they highlight the need to concentrate on patients' families and significant others, as relationships with others were a major concern of the patients in our study. It is suggested that this could be done with three main aims in mind: (1) empowering and educating families about how to manage likely domestic conflicts to minimize stress; (2) managing the crisis and stress experienced by those looking after the patient; and (3) providing education about the illness, medical procedures, and interaction with the patient. Furthermore, as patients face new, unknown, and unpredictable symptoms over the course of their lives that can affect their quality of life, it is suggested that MS patients remain in controlled programs whereby they may be constantly monitored and followed up.
The relationship between exacerbated diabetic peripheral neuropathy and metformin treatment in type 2 diabetes mellitus
Metformin-treated diabetics (MTD) showed a decrease in cobalamin and a rise in homocysteine and methylmalonic acid, leading to accentuated diabetic peripheral neuropathy (DPN). This study aimed to determine whether or not metformin is a risk factor for DPN. We compared MTD to non-metformin-treated diabetics (NMTD) clinically, using the Toronto Clinical Scoring System (TCSS), laboratory tests (methylmalonic acid, cobalamin, and homocysteine), and electrophysiological studies. Median homocysteine and methylmalonic acid levels in MTD vs. NMTD were 15.3 vs. 9.6 µmol/l (P < 0.001) and 0.25 vs. 0.13 µmol/l (P = 0.02), respectively, both significantly higher in MTD. There was a significantly lower plasma level of cobalamin in MTD than in NMTD. Spearman's correlation showed a significant negative correlation between cobalamin and increased metformin dose and a significant positive correlation between TCSS and increased metformin dose. Logistic regression analysis showed that MTD had significantly longer metformin use duration, higher metformin dose (> 2 g), higher TCSS, lower plasma cobalamin, and significantly higher homocysteine. Diabetics treated with metformin for a prolonged duration and at higher doses had lower cobalamin and more severe DPN.
Clinical tests do not generally distinguish early changes of neuronal damage, and some DPN patients give negative results in the clinical assessment 23. Our study aims to determine the changes in serum levels of cobalamin, methylmalonic acid, and homocysteine in metformin-treated diabetics (MTD) and to evaluate the connection between the severity of DPN and the long-term utilization of metformin.
Methods
Participants and inclusion criteria. 150 adult patients with type 2 DM (T2DM) were included according to the Standards of Medical Care in Diabetes Guidelines 24. We prospectively identified patients diagnosed with T2DM who had been treated with metformin for more than 6 months, and selected an equal number of age- and sex-matched non-metformin-treated diabetics. The study design was a case-control, prospective, analytical, observational study. Cases were T2DM patients on metformin (75 participants). Controls were T2DM patients not taking metformin (75 participants). The present prospective study was conducted from the first of May 2017 to the end of September 2018 in the neurology outpatient clinic, Mansoura University Hospital, Egypt. The included patients were type 2 diabetics on oral hypoglycemic therapy with clinical proof of DPN. Patients were classified into 2 groups: metformin treated (group I), which included 75 patients administered metformin for the previous 6 months or more, and non-metformin treated (group II), which included 75 patients who were not administered metformin for the previous 6 months (but were administered other oral hypoglycemic drugs). The duration of DPN was determined from the history of onset of symptoms or from previous electrophysiological studies.
Sample size determination. The formula for estimation of sample size is n = Z²pq/e², where n is the needed sample size, Z is the critical value at the 95% confidence level (1.96), p is the prevalence of vitamin B12 deficiency in cases with type 2 DM on metformin (9.7%) obtained from the study of de Groot et al. 25, e is the margin of error the researchers were willing to accept, here equal to 0.05, and q = (1 − p) is the proportion of the sample population not covered by the study.
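As a rough check, the sketch below plugs the reported values into this standard (Cochran-type) proportion formula; the function name and rounding behavior are illustrative and not taken from the paper.

```python
import math

def required_sample_size(p: float, e: float = 0.05, z: float = 1.96) -> int:
    """Minimum sample size for estimating a proportion p with margin of error e
    at the confidence level implied by the critical value z (1.96 for 95%)."""
    q = 1.0 - p
    n = (z ** 2) * p * q / (e ** 2)
    return math.ceil(n)  # round up to the next whole participant

# Values reported above: p = 9.7% prevalence, e = 0.05, Z = 1.96
print(required_sample_size(0.097))  # ~135, consistent with the 150 patients enrolled
```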
Exclusion criteria. The patients excluded from this study possessed the following characteristics: type 1 diabetes mellitus, impaired glucose tolerance, peripheral neuropathy due to causes other than diabetes or cobalamin insufficiency, and cessation of metformin treatment or administration of metformin for less than 6 months.

Clinical assessment. A complete history was taken covering symptoms of DPN, family history of DPN, medical history, duration of DM, and current medications for DM or other systemic diseases. Complete general and neurological examinations were also done. The clinical assessment of the severity of DPN was done with the Toronto Clinical Scoring System (TCSS), which comprises three sections: symptom scores, reflex scores, and sensory test scores. The greatest score is 19 points: 0-5 indicates no diabetic peripheral neuropathy, 6-8 mild, 9-11 moderate, and 12-19 severe diabetic peripheral neuropathy 24,26.

Electrophysiological studies and laboratory investigations. Regarding the electrophysiological studies, a nerve conduction study was carried out for all patients utilizing a Xilec-Xcalibur system (Natus Neurology, USA, 2010). The nerves selected according to clinical symptoms and findings were examined with motor and sensory conduction studies; the distal latency, the amplitude of the action potential, and the motor and sensory conduction velocities were measured 25,27. The laboratory investigations included complete blood count, electrolytes, renal and hepatic functions, thyroid-stimulating hormone, rheumatoid factor, serum folate, hemoglobin A1c, and serum cobalamin level according to the classic radioimmunoassay method. High-performance liquid chromatography was used to measure homocysteine levels and mass spectrometry was used to measure methylmalonic acid levels for all patients.

Statistical analysis. The Statistical Package for Social Sciences (SPSS) software, version 21.0, was utilized for data entry, validation, and analysis. Frequency tables and diagrams were produced for the categorical variables, and tests of significance were carried out for the various variables. The Chi-square test was utilized for categorical variables, and independent t-tests were utilized to test the significance of differences between the two groups. The Mann-Whitney U test was utilized to compare the means of the independent variables. Correlation coefficients and Chi-squared tests were utilized to gauge the relation between quantitative and qualitative variables. Statistical significance was defined as a P-value < 0.05. Spearman's correlation coefficient was used to analyze the relationship between continuous variables such as metformin dose and vitamin B12 levels and also between metformin dose and TCSS scores. Logistic regression analysis was carried out to identify independent risk factors for diabetic PN in metformin-treated patients.
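To make the TCSS cut-offs above concrete, the following minimal sketch maps a total TCSS score onto the severity bands used in the study; the function and the example calls are illustrative only (the two example scores are the central group values reported later, not individual patient data).

```python
def classify_tcss(score: int) -> str:
    """Map a Toronto Clinical Scoring System total (0-19) to the severity
    bands used in the study: 0-5 none, 6-8 mild, 9-11 moderate, 12-19 severe."""
    if not 0 <= score <= 19:
        raise ValueError("TCSS total must be between 0 and 19")
    if score <= 5:
        return "no DPN"
    if score <= 8:
        return "mild DPN"
    if score <= 11:
        return "moderate DPN"
    return "severe DPN"

print(classify_tcss(10))  # moderate DPN (central TCSS value of the metformin-treated group)
print(classify_tcss(5))   # no DPN (central TCSS value of the non-metformin-treated group)
```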
Patients' demographic and clinical characteristics.
This study was carried out on 150 patients with type 2 diabetes mellitus. Group I included 75 patients administered metformin for the previous 6 months or more. Group II included 75 patients who were not administered metformin for the previous 6 months (but were administered other oral hypoglycemic drugs). The two groups were similar in age and sex. There was also no significant difference between the groups in disease severity, duration of diabetes, or duration of diabetic PN (P = 0.9 and 0.82, respectively) (Table 1). MTD had significantly more moderate to severe DPN and higher total TCSS scores (10 ± 7.5 vs. 5 ± 9.5, P < 0.001), indicating that metformin users had more severe DPN (Table 1). Table 2 shows a significantly lower median serum cobalamin in the metformin-treated diabetics (222 vs. 471 pmol/l; P < 0.001). There were also significant differences between the two groups in median serum homocysteine and methylmalonic acid levels, with MTD showing higher levels (P < 0.05). HbA1c was higher in metformin-treated diabetics without statistical significance (P = 0.09). Median conduction velocities of the superficial peroneal and sural nerves in MTD were significantly slower (P < 0.05). Additionally, median sensory nerve action potentials (SNAP) of the sural and superficial peroneal nerves in MTD were significantly lower (P < 0.05) (Table 3).
Laboratory findings for metformin-treated and non-metformin-treated groups.
Correlation of metformin dose with cobalamin (vitamin B12) plasma levels and TCSS. Spearman's correlation coefficient was used to examine the bivariate relationship between metformin dose and vitamin B12 levels and between metformin dose and TCSS. Figure 1A shows a significant negative correlation between cobalamin plasma levels and higher doses of metformin (r = −0.522, P = 0.01). Figure 1B shows a significant positive correlation between TCSS and increased dose of metformin (r = 0.891, P < 0.05).
Correlation of the severity of DPN with cobalamin, homocysteine, and methylmalonic acid (Fig. 2). There was a significant inverse relationship between the severity of DPN and cobalamin level (r = −0.81, P < 0.05), while the severity of DPN was directly related to higher levels of both methylmalonic acid and homocysteine (r = 0.71, P < 0.05 and r = 0.73, P < 0.05, respectively) (Table 4).
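A minimal sketch of how such rank correlations could be computed with SciPy is shown below; the arrays are invented placeholder observations, not the study's measurements, and the variable names are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder paired observations per patient (not the study's raw data)
metformin_dose_g = np.array([1.0, 1.5, 2.0, 2.0, 2.5, 3.0, 1.0, 2.5])
cobalamin_pmol_l = np.array([480, 410, 300, 260, 220, 190, 500, 240])
tcss_score       = np.array([4,   6,   9,   10,  12,  14,  3,   11])

r_b12, p_b12 = spearmanr(metformin_dose_g, cobalamin_pmol_l)
r_tcss, p_tcss = spearmanr(metformin_dose_g, tcss_score)

print(f"dose vs. cobalamin: r = {r_b12:.2f}, P = {p_b12:.3f}")   # expected negative r
print(f"dose vs. TCSS:      r = {r_tcss:.2f}, P = {p_tcss:.3f}")  # expected positive r
```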
Discussion
Diabetic patients have different degrees of nervous system damage ranging from mild to severe forms, the most widely recognized being peripheral neuropathy, which affects 60% to 70% of cases 28. The aim of this study was to assess whether metformin is a risk factor for the development of DPN by evaluating the impact of metformin treatment on serum cobalamin levels and the severity of DPN.

[Table 3. Comparison of nerve conduction study features for metformin-treated and non-metformin-treated groups. SNAP: sensory nerve action potential, MT: metformin-treated, NMT: non-metformin-treated, N: number. Recoverable row: superficial peroneal nerve SNAP amplitude (µV), MT 3.2 ± 5.7 vs. NMT 5.9 ± 6.9, P < 0.05.]

The present single-center prospective study, conducted over one and a half years, was carried out on 150 patients (80 males and 70 females) with DPN during their attendance at the neurology outpatient clinic. The current study found that MTD had significantly more DPN (50.7%), higher total TCSS scores (10 ± 7.5), higher serum levels of homocysteine (15.3 µmol/l) and methylmalonic acid (0.25 µmol/l), lower cobalamin (222 pmol/l), and lower median conduction velocity and SNAP for the superficial peroneal and sural nerves. Also, there was a significant negative correlation between cobalamin and metformin dose. On the other hand, there was a significant positive correlation between TCSS and metformin dose. The significant explanatory parameters associated with DPN occurrence in MTD were a larger dose and longer duration of metformin usage, a low cobalamin level, and a high homocysteine level. Our results were in harmony with Khan et al., who concluded that prolonged metformin treatment in patients with type 2 DM was accompanied by a higher occurrence of vitamin B12 inadequacy 35. Clinicians should therefore screen diabetic patients who are metformin users for B12 deficiency. Our study exhibited higher median serum levels of Hcy and MMA; these deviations from normal values were strongly related to metformin use. Our findings were in line with those of Ahmed and of Chapman et al., demonstrating that long-term metformin users, compared to non-metformin users, had lower serum Cbl and higher MMA and Hcy 36,37.
Our results demonstrate that metformin treatment conveys a potential hazard for the occurrence of cobalamin deficiency. However, this risk would most likely be small if cobalamin status is monitored consistently. Cobalamin inadequacy is increased among patients with type 2 diabetes mellitus, and a more precise differential diagnosis is needed to distinguish diabetic neuropathy from metformin-induced neuropathy. We also found that patients with type 2 DM, DPN, and more than 6 months of treatment with metformin demonstrated higher scores on the TCSS, indicating more clinically severe DPN when compared with non-metformin users. These findings were frequently connected with cumulative metformin utilization, with a positive correlation between DPN and cumulative metformin dose. These results were similar to those of Holay et al., who found that the incidence of neuropathy assessed by TCSS and NCV testing was significantly higher in metformin users, with a positive correlation with cumulative dose and duration of metformin and a negative correlation with serum vitamin B12 levels 38. Similar to our results, Singh et al. discovered a smaller yet significant difference in neuropathy scores between metformin and non-metformin users 39.
Electrophysiologically, this study showed that metformin users had more severe DPN. The mean conduction velocity and mean SNAP amplitude of the examined nerves were significantly lower in the metformin users' group. These results differ somewhat from those of Wile and Toth, who found a non-significantly lower mean conduction velocity and mean SNAP amplitude in the metformin users' group 40. This may be explained by differences in demographic characteristics and the longer duration of the disease in our study. The nerves of the lower limbs, especially the sural and superficial peroneal nerves, were more sensitive to diabetic effects. This is also supported by the studies of Shibata et al. and Agarwal et al., who found that the sural nerve conduction study was the most reliable test correlating with the severity of diabetic PN 41,42.
In our study, the severity of DPN was inversely correlated with the cobalamin level and directly correlated with higher levels of both methylmalonic acid and homocysteine. Similarly, Wile and Toth found that clinically worsened DPN was associated with a lower cobalamin level and higher levels of both methylmalonic acid and homocysteine 40. It has been shown that elevated levels of total homocysteine correlate significantly with DPN, independent of other risk factors 43,44. Serum MMA correlates positively with the severity of neuropathic pain and can be used as a useful marker in the assessment of peripheral neuropathy 45.
The findings of this study suggest that a cumulative dose of metformin is accompanied by a decrease in cobalamin levels and deterioration of DPN; however, metformin remains the main therapy in type 2 DM.

Limitations and directions of future work. Finally, the present study had some limitations. The first limitation was the relatively small number of patients due to limited resources and financial issues. Glycemic control not being assessed is also a limitation of this study. In addition, this study cannot evaluate long-term effects of risk factors on the development of DPN. Further studies should be conducted on a larger number of patients over a longer duration and should follow up patients with DPN for improvement of manifestations, especially after stoppage of metformin.
Conclusion
Patients with type 2 DM who were treated with metformin for a prolonged duration and at higher doses had lower plasma cobalamin and more severe DPN. Patients on metformin treatment must be monitored regularly for plasma levels of cobalamin, methylmalonic acid, and homocysteine.
Recommendation
• Routine screening of type 2 diabetic patients on metformin for vitamin B12 inadequacy is highly recommended due to its high prevalence and the significant clinical impacts that may result in DPN. • Besides, we suggest, beginning treating patients with B12 once a borderline or low level is recognized. www.nature.com/scientificreports/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creat iveco mmons .org/licen ses/by/4.0/.
Analyzing Human Scale Space on Street Characteristics in The Tembalang Education Area
The education area is considered a new growth center because the demand for mobility is quite high, related to the ease with which the community can reach its destinations. The term human-scale space becomes important for achieving ideal conditions, without external interference, when moving. The research location is the Tembalang education area of Semarang City, and the study examines the three main variables of human-scale space, namely accessibility, distance, and coordination between locations. Data were obtained through field observations. The research methods used are descriptive quantitative analysis and scoring analysis. The purpose of this study was to analyze human-scale space based on street characteristics in the Tembalang education area. After observation, the results of this study classify the characteristics of streets in the Tembalang education area both on campus and outside of campus.
Introduction
The loss of human-scale space can contribute to the occurrence of criminal acts, which affect regular activities at certain times [1]. According to Newman (1972), the spatial scale is a human model that forms individual physical expression. In this context, the indicators of accessibility, distance, and the interrelationship between locations are important factors shaping patterns of behavior during mobility. These three indicators are the main measures by which an individual initiates movement at a location, at a certain time, or with a certain mode of transportation.
Accessibility is the ability of individuals to reach certain routes or locations through means of transportation in urban areas [2]. The function of a connecting facility is to make it easier for the individual to reach an activity point. Connecting facilities are defined as pedestrian paths, bridges, or supporting modes of transportation such as motorbikes, bicycles, or pedicabs that differ from the main modes of transportation at certain times and routes. Distance is defined as the actual measure of an object, or the measure experienced when an individual moves in space [3]. Distance is also often associated with the ability of individuals to compare one object with another [4]. Distance relates to the radius of an individual's reach to a location, or to how far individual mobility extends. Meanwhile, the interrelationship between locations is defined as the unity of the system between activities and/or humans that creates a behavior [5]. In this research context, it refers to spatial integration in conducting mobility. Incompatibility among the three indicators triggers negative behavior, so that the community physically has no control over the surrounding environment [6].
The existence of several universities, such as Diponegoro University in Semarang City, is a main attraction for students from outside Semarang. This increases activity around the education area, with both positive and negative impacts; the negative impacts include the emergence of crime along the streets, which creates a negative perception of the university area. A college should occupy a safe, comfortable, and quiet area for study, but the Tembalang education area differs from other education areas in Semarang because it is located in a highland area. As a result, the potential for crime in the university environment can increase. Therefore, this study presents a predictive model of crime-prone areas using streetscape variables such as street-facing building entrances, the presence of parking lots, and fenced wall heights.
The study aims to analyze human scale based on street characteristics in the Tembalang education area, Semarang. Research on the characteristics of streets in education areas is fairly common, but few studies have linked them to human scale. With this research, the authors hope to provide information on whether the Tembalang education area is safe, as a result of the ownership of individual space, which in turn can affect movement patterns.
Human Scale
The concepts of space (physical space as a place where humans behave and communicate naturally) and scale (understanding an area from the smallest to the largest unit) have become common in urban planning, namely as ways of understanding geographical objects spatially [7]. LeFebvre also emphasizes that space and scale will continue to change, along with how humans use that space. On the other hand, Yi-Fu Tuan [8] argues that space does not have a fixed scale limit, as long as the humans who carry out activities in it feel that they have a sense of place.
Human-scale space is a measure of a particular object based on human visuality and the point of view of the object [3]. The phenomenon of human-scale space is often difficult to see, because the human perceptual system directly forms a constant measure of the objects around it. People therefore often think that the size of an object corresponds to their own, and a different picture of the object is lost. It is also emphasized that human scale is not measured by the desire of an architect or engineer in scaling objects, but by how humans are able to receive and communicate with those objects as a visual form from a different perspective [3].
Space adjusts to the scale of human needs related to distance, accessibility, and coordination between origin and destination. According to [9], in comparative research conducted in London (United Kingdom), there is an association between humans and the scale of space used for activities. This is a representation of the successful formation of space in accordance with human needs.
Reference [10] created an analysis of human-scale space needs across different behaviors. This depends on the purpose of mobility, socio-demographic characteristics, the natural and built environment, and the quality of public transportation, and it is also not separate from particular subjects such as children, people with disabilities, and women (gender aspects). In addition, coordination between human spaces is measured by an environment that can create walking space, lighting, security, density, and regular mixed land use [11]. In conclusion, design must be able to connect and estimate the connectivity between locations, to build a climate of conducive mobility.
Accessibility
Accessibility in urban planning is defined as the ability of a city or region to facilitate the central needs of human activities, which are influenced by various factors and the existence of certain regional functions [12]. The difficulty often faced by individuals is defeat in the competition to achieve limited accessibility [13], even though accessibility is an important factor in shaping personal space when conducting mobility. Accessibility can also provide security and comfort, supporting elements of urban vitality. Good accessibility is usually associated with ease of reaching the destination location, the availability of transportation modes to get to the location, the convenience of physical objects used to link locations, and the ease of creating mobility activities related to improving environmental quality. The challenge often faced in accessibility planning is how to balance the use of transportation and human effort in reaching the destination [14]. For example, in the context of the education area, it is a balance between the availability of pedestrian lanes and modes to reach educational facilities such as campuses. The function of this is to support the proper occupancy of a city and attract public space, at the same time relating to service activities and fulfilling individual needs. When the urban space used by individuals becomes smaller or narrower, negative impacts such as self-congestion and disturbed security arise [15]. Individuals are more likely to find accessibility easy when they have interests related to public needs.
Land Use
Land use is related to activities. Coordination between locations is often associated with a form of activity, or with characteristics of an area or location that attract functions, uses, and other activities [16]. Integration and coordination of activity patterns is important; as argued in [17], the coordination of a location determines how easily humans can reach other locations. Critical issues related to location coordination are usually associated with:
1. the effect of congestion caused by the accumulation of activities in one location;
2. narrow space on the human scale for walking and dangerous pedestrian paths;
3. the visual placement of one activity relative to other activities [18].
Inter-location coordination is also often associated with the ability of a space to interact, in addition to the presence of complementary infrastructure or certain transportation routes [19]. In understanding the coordination between locations and the people who carry out activities in them, reference is often made to location and connectivity theory (place and linkage theory). Place theory discusses how physical space is related to its social and cultural characteristics. The distinctive features of these locations then form an inter-space network that is able to adapt to the framework or structure of natural space functions (linkage theory) [20].
Various factors for defining coordination between locations include distance, opportunities for spatial competition, clustering, and agglomeration between spaces. In transportation theory, integrated location zones can be analyzed in terms of traffic circulation that acts as a separator between locations [19], along with the issues that may arise from coordinating locations with transportation routes, such as congestion or an increasing crime ratio.
Method and Research Sites
This research uses quantitative methods with an emphasis on observation techniques for data collection. The research locations are in the education areas of Diponegoro University (UNDIP) and the Polines colleges, covering Banyuputih Raya street, Lingkar Utara UNDIP street, and Prof. Soedarto street. The analysis techniques used are descriptive statistics and scoring analysis; a simple sketch of the scoring step is given after the site description below. The variables in the study are divided into two groups, namely: a. Accessibility variables, consisting of pedestrian availability, zebra crossings, availability of lighting, bus stops, and the availability of boundaries between pedestrians and the highway (street shoulder).
b. Land Use Variables
The coordination variable between locations is focused on land use in the Tembalang education area. The development of Tembalang after the relocation of Diponegoro University in 1995 demonstrates gentrification, identified as causing social, economic, and physical change. Gentrification is considered to greatly affect Tembalang's rapid growth and to improve urban services such as facilities and infrastructure [21]. Therefore, community mobility will increase in line with the improvement of facilities and infrastructure. The research location used in this study is the Tembalang educational area, where there are several universities, including Diponegoro University and Semarang State Polytechnic.
The research location is directly adjacent to the residential areas of Blimbing Gorge and Baskoro. Three streets directly border the research location, namely Prof. Soedarto street, Banyuputih Raya street, and Lingkar Utara UNDIP street. At the research location there are public facilities such as the Diponegoro National Hospital (RSND), the UNDIP gas station, a BNI Bank branch, and several shops. The justification for the selection of the research locations is that these streets are directly adjacent to the campus, and one of them is the main access to the campus. The following map describes the location of the study (see Figure 1):
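As a rough illustration of the scoring analysis described above, the sketch below assigns a simple presence/absence score to each accessibility variable per street corridor and sums them; the variable names, the scoring scheme, and the example observations are hypothetical and are not taken from the paper's Table 1.

```python
# Hypothetical scoring sketch for the accessibility variables; values are illustrative.
ACCESSIBILITY_VARIABLES = [
    "pedestrian_path", "zebra_crossing", "lighting", "bus_stop", "street_shoulder",
]

# 1 = present along the corridor, 0 = absent (example observations, not survey data)
observations = {
    "Prof. Soedarto street":      {"pedestrian_path": 1, "zebra_crossing": 1, "lighting": 1, "bus_stop": 1, "street_shoulder": 1},
    "Banyuputih Raya street":     {"pedestrian_path": 0, "zebra_crossing": 0, "lighting": 1, "bus_stop": 0, "street_shoulder": 0},
    "Lingkar Utara UNDIP street": {"pedestrian_path": 0, "zebra_crossing": 0, "lighting": 1, "bus_stop": 0, "street_shoulder": 1},
}

def corridor_score(values: dict) -> int:
    """Sum the presence scores of each accessibility variable for one corridor."""
    return sum(values[v] for v in ACCESSIBILITY_VARIABLES)

for street, values in observations.items():
    print(f"{street}: {corridor_score(values)} / {len(ACCESSIBILITY_VARIABLES)}")
```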
Result
The results of the study and the relationships between variables are based on a sample of observational activities, so the results of this study are not fully generalizable. Beyond the variables used in this study, there are other factors that influence human scale on street characteristics in the Tembalang education area that could be studied. Returning to the research objective, which is to understand human scale based on street characteristics in the Tembalang education area, the streets that provide access to the campus are the main factors in achieving security during mobility in the area. This research is therefore expected to provide information on whether the Tembalang education area has been able to apply the human-scale concept to mobility towards the campus. The assessment of each street corridor in the research location is presented below (see Table 1). The researcher observed the Tembalang educational area, focusing on the street network, parts of which have their own characteristics. This characterization has two sides: there are positive and negative values on each street, which later affect the application of human-scale space. For example, some streets have complete infrastructure for mobility, while others lack facilities that the community needs to carry out their activities, for example pedestrian paths. Accessibility is the variable used as the main factor in applying human-scale space in the Tembalang educational area. The success of a travel activity can be measured through security and comfort from origin to destination. Based on the surveys conducted, the street characteristics on the way to campus also vary. In traveling to campus, it is necessary to pay attention to the interrelationships and interactions of the surrounding space along the street between the location of the house or boarding house and the campus.
At the study location there are three streets, namely Prof. Soedarto street, Banyuputih Raya street, and Lingkar Utara UNDIP street. Of the three, Prof. Soedarto street has the most integrated mobility system. The accessibility variables seen along Prof. Soedarto street include a pedestrian path with a width of 2 meters. In addition, there are BRT shelters, one of which, the BRT RS Diponegoro bus stop, is close to RSND, the Faculty of Economics and Business campus, and the Faculty of Medicine campus. This makes it easier for students and communities in the campus area to travel by BRT and on foot.
Second, land use on Prof. Soedarto street tends to vary because there are commercial activities (food stalls, copy centres), health facilities (RSND), education (UNDIP, Polines), and settlements (see Figure 2). Based on this land use, routine movements can be expected: every community member in the neighborhood around Prof. Soedarto street makes a trip every day, especially students who travel daily towards the campus.
Third, BRT bus stops and pedestrian paths are not the only choices people have for mobility. Technological advances such as online transportation services like Go-Jek and Grab also facilitate users' connectivity after or before heading to a BRT stop location. These choices allow users to use public transportation modes for the whole journey, or help reduce walking distance.
The availability of routes specifically for pedestrians also varies. For example, pedestrian lanes are designed to maintain the same height level along the route and are not interrupted unless another street breaks the main route. The pedestrian lanes in the Tembalang education area are not yet fully equipped with street dividers, either in the form of a boulevard or plants (1.2 m wide), whose purpose is to reduce direct physical contact between pedestrians and motorized vehicle users. One pedestrian lane that has been equipped with a barrier from the highway, in the form of a boulevard and plants, is on Prof. Soedarto street within the campus area, such as the street to the rectorate.
Accessibility to the campus in the morning and evening, for example at 7:00 pm, is quite good when associated with dense surrounding activity. Lighting is available every 100 m on pedestrian lanes or on the boulevard, and the quality of the lighting is also quite good, but considerable insecurity is felt in locations where activity is quiet or on unused land in the Tembalang education area. For example, along Banyuputih Raya street up to Lingkar Utara UNDIP street, where there is no activity, a sense of insecurity or vulnerability to crime is felt not only at night but also during afternoon hours when the street is quiet. Moreover, the lights toward the campus often dim, so access to the campus is quite dark and vulnerable to crime; when the lights dim, illumination comes only from passing vehicles. In addition, there is no CCTV coverage along the street to the campus.
Of the three streets in the Tembalang education area, the researchers considered Prof. Soedarto street the most effective in creating coordination between spaces. Although Prof. Soedarto street runs along the UNDIP campus area, it is actually close to the surrounding public spaces. Along Prof. Soedarto street, BRT Trans Semarang is served, so there is a bus stop. Even though BRT Trans Semarang is available, students still use modes of transportation such as motorbikes, cars, and online transportation on their way to campus.
Compared with these conditions, the other two streets, Banyuputih Raya street and Lingkar Utara UNDIP street, tend to be quieter. Their locations are also quite far from the center of activity and are dominated, for example, by landfills and protected forest. These streets are intended to relieve the traffic flow on Prof. Soedarto street, but they are considered to lack strong coordination between spaces. As a result, students and communities are reluctant to use them and prefer practical modes of transportation (online vehicles, for example) or private vehicles.
Conclusion
Based on the research that has been done by observing the UNDIP campus area, there are three streets that have different characteristics, namely Prof. Soedarto street, Banyuputih Raya street, and Lingkar Utara UNDIP street.
Sarcoidosis: a diagnostic challenge in atypical radiologic findings of unilateral lymphadenopathy
Sarcoidosis is a chronic systemic disease with a wide array of clinical findings. Given that the clinical symptoms are not pathognomonic, chest radiographs have become essential to the initial diagnosis and choice of treatment modality. Diagnosis hinges on ruling out alternative diagnoses; sometimes, advanced radiologic techniques and histopathology are required. On this occasion, we present a case of a patient with generalized symptoms, no significant chest radiograph findings and lymphadenopathy where advanced imaging and pathology assisted in the diagnosis.
INTRODUCTION
Sarcoidosis is a chronic systemic disease that was initially described in 1869 by Jonathan Hutchinson [1]. Yet, to date, the specific cause and etiology remain unclear [2]. While erythema nodosum was thought to be a key feature at the time of its discovery, sarcoidosis is now much more commonly recognized via the trio of hilar lymphadenopathy, pulmonary and ocular involvement [1]. Symptoms are often generalized and can be markedly diverse [3]. The diagnosis is based on the clinical presentation and radiological findings. If the presenting symptoms do not specifically point to Löfgren's syndrome or Heerfordt's syndrome, and there is no asymptomatic bilateral hilar adenopathy, then the diagnosis is reliant on a biopsy [2]. Atypical presentations often confound the diagnostic effort. Here, we present an atypical presentation of sarcoidosis.
CASE REPORT
Our patient is a 38-year-old Caucasian female with a past medical history of papillary thyroid cancer; she had been treated via thyroidectomy 2 years prior to presentation. She first presented to our outpatient clinic with a 20-pound weight loss over 2 months and a persistent, dry cough.
She had initially presented to the emergency department with a cough, tactile fever and a decreased appetite. She was discharged from the emergency department and given a 6-day course of the antibiotic azithromycin. The following day, the patient developed a Bell's palsy, and she finally returned to the emergency department 1 week later with persistent symptoms and new right-sided neck pain. A chest radiograph was unremarkable. A computed tomography (CT) of the thorax was performed and demonstrated an enlarged lymph node in the aorto-pulmonic window, which had not been seen on prior imaging.
On her initial outpatient visit, the patient had been experiencing a chronic cough, fatigue and a loss of appetite for ∼6 weeks. The only noteworthy physical exam finding was a right facial droop. Initial chemistries showed a thyroid stimulating hormone (TSH) of 0.22 µIU/ml (0.450-4.500 µIU/ml), a total triiodothyronine (T3) of 72 ng/dl (71-180 ng/dl), a free thyroxine (T4) of 2.63 ng/dl (0.82-1.77 ng/dl), a thyroglobulin antibody of <1.0 IU/ml (0.0-0.9 IU/ml) and a thyroglobulin of <0.1 ng/ml (1.5-38.5 ng/ml). A positron emission tomography (PET) scan showed multiple active lymph nodes in the mediastinum with increased activity in the liver at the junction of the left and right lobes.
An endobronchial ultrasound and video-assisted thoracoscopic surgery were performed. A mediastinal mass (4.0 × 2.5 × 2.4 cm in dimension), a large aorto-pulmonary lymph node and two sub-carinal lymph nodes were removed for permanent pathology analysis.
The cytology showed polymorphous lymphocytes, benign bronchial epithelial cells, and alveolar macrophages. The mediastinal mass and lymph node biopsies showed predominantly non-caseating granulomatous lymphadenopathy with focal necrosis. Pathology was negative for malignancy.
The remaining diagnostic puzzle was the etiology of the lesion in the liver. A magnetic resonance imaging (MRI) scan of the abdomen with and without contrast was performed illustrating two non-enhancing, 1.3-cm lesions straddling the falciform ligament. The enhancement pattern was atypical for metastasis.
This combination of clinical and pathology findings led to a diagnosis of sarcoidosis. The patient has since followed up with evaluation for pulmonary function studies. No systemic treatment has been initiated at this time. The patient has been followed clinically with mild symptomatic improvement. A repeat MRI will be performed again in 3 months.
DISCUSSION
Sarcoidosis is a chronic systemic disease that continues to have an uncertain etiology. Diagnosis is based on clinical, radiologic and histopathologic findings [1,2]. Broadly described, it is a multisystem inflammatory disease that can affect almost any organ [4]. Sarcoidosis has a strong genetic component and is more prevalent in African Americans with rates 3.6-8 times higher than Caucasians of European descent [4].
The initial presentation typically has pulmonary findings; these are detected in 90% of suspected patients who receive chest radiographs [3]. Half of sufferers are typically asymptomatic at the time of diagnosis [3]. The most common symptoms are cough and dyspnea. The most common extra-pulmonary findings include peripheral lymphadenopathy and skin and eye manifestations [2]. Because sarcoidosis is a systemic disease, it can also cause thyroid dysfunction, with some cases mimicking metastatic thyroid cancer [5,6]. The chest radiograph can be sufficient for diagnosis in the proper clinical contexts [7]. These include asymptomatic hilar lymphadenopathy, Löfgren's syndrome (bihilar lymphadenopathy, arthralgia and erythema nodosum) or Heerfordt's syndrome (parotitis, Bell's palsy, anterior uveitis and fever). When these pathognomonic sarcoidosis syndromes are identified, a biopsy for diagnostic confirmation is not required [3,7].
The current radiological staging is based on bilateral hilar lymphadenopathy, parenchymal disease and indicators of lung fibrosis. CT scans should be considered with atypical clinical or radiographic findings and for evaluation of suspected complications of other concurrent pulmonary diseases [7,8]. PET scans are not recommended for the standard workup, but may be necessary in rare circumstances to clarify a difficult diagnosis, including explaining extrathoracic symptoms and determining the activity of inflammation. In addition, MRI has most often been utilized for evaluation of myocardial sarcoidosis [7].
A typical biopsy classically consists of non-necrotizing granulomas surrounded by lamellar hyaline collagen containing macrophages, epithelioid cells and CD4+ lymphocytes [1,9]. However, these findings by themselves are not specific for sarcoidosis and require correlation with other clinical and radiological findings. Treatment is reserved for symptomatic pulmonary sarcoidosis with parenchymal infiltrates. Additional indications for treatment include, but are not limited to, ocular pain or loss of vision, cardiac arrhythmias, heart failure and other types of end-organ failure [10]. Initial treatment is typically with systemic corticosteroids (usually prednisone) until amelioration of symptoms allows an opportunity to wean the corticosteroids. Anti-metabolite and biologic agents can be considered in cases where there is insufficient clinical improvement or an inability to wean the corticosteroids [10].
The patient presented had a history of thyroid cancer with no epidemiologic or obvious genetic risk factors for sarcoidosis. An initial chest radiograph had only subtle abnormalities. It was fortunate that a PET scan was performed, revealing the multiple active mediastinal lymph nodes, which were later biopsied. Based on the biopsy result, sarcoidosis became the likely diagnosis. Sarcoidosis remains at times a difficult diagnosis, particularly when there are no chest radiograph findings. Careful evaluation, a wide differential, and a high index of clinical suspicion are needed when other signs are fleeting.
ETHICAL APPROVAL
Conforms to standards currently applied in the country of origin.
Optic Neuritis in an Adult Patient with Chickenpox
Central nervous system involvement in a patient with primary infection with Varicella zoster virus is rare, especially in the immunocompetent adult. In particular, isolated optic neuritis has been described in a small number of cases. The authors present a case of optic neuritis in an immunocompetent patient. A 28-year-old woman presented to the emergency room with a history of headaches during the previous week, without visual symptoms. The examination was unremarkable, except for a rash suggestive of chickenpox and hyperemic and edematous optic disc, bilaterally. Visual acuity and neurological examination were normal. Two days later, she complained of pain on eye movement and decreased visual acuity, which was 20/32 in her right eye and 20/60 in her left eye. Four days after admission, her visual acuity started to improve, and two months later, she had 20/20 visual acuity in both eyes. To our knowledge, this is the first reported case of an immunocompetent adult in which a Varicella zoster virus associated optic neuritis presented with fundoscopic changes before decreased visual acuity. This suggests that this condition may be underdiagnosed in asymptomatic patients.
Introduction
Varicella zoster virus (VZV) causes two distinct clinical entities: chickenpox and zoster. Chickenpox is the result of primary infection and is more frequent in children. Reactivation of the virus is more common in adults after the sixth decade of life or in the immunosuppressed patient. The most common complication of VZV primary infection is secondary bacterial infection of the cutaneous lesions. Central nervous system involvement is rare, especially in the immunocompetent adult [1]. In particular, isolated optic neuritis has been described in a small number of cases.
We report a case of an adult immunocompetent patient with optic neuritis in the context of primary infection with Varicella zoster virus.
Case Report
A 28-year-old Caucasian woman presented to the emergency room with a history of a pressure-type frontal headache during the previous week; there was no exacerbation when lying down nor with Valsalva maneuvers and no accompanying features, namely visual symptoms. Medical history was unremarkable except for venous thrombosis of her left leg 5 years before. She was not taking any drugs other than oral contraceptives. She had a son who was diagnosed with chickenpox two weeks before. She was febrile (temperature of 38°C), and there was a rash suggestive of chickenpox. Uncorrected visual acuity was 20/20 in both eyes, and the optic discs were swollen and hyperemic, with ophthalmological and neurological examinations otherwise normal. Two days later, she complained of ocular pain on eye movement and bilateral blurred vision. By that time, examination revealed reduced visual acuity of 20/32 in her right eye (OD) and 20/60 in her left eye (OS) and a relative afferent pupillary defect in her left eye. She reported a difference in brightness of the red color, with less saturation in the left eye.
Routine blood testing and immunological workup (ANA, ANCA, anticardiolipin antibodies, and complement levels) were normal. HIV serology was negative as well as VDRL, Toxoplasma, Bartonella, and Borrelia antibodies.
Cerebral computed tomography (CT) scan and magnetic resonance imaging (MRI) were normal.
Cerebrospinal fluid (CSF) examination was unremarkable: clear fluid with a normal opening pressure of 130-140 mm H2O with no block when pressing the jugular vein, normal protein and glucose levels, fewer than 5 leukocytes, and no oligoclonal bands. Antibodies for the following microorganisms were negative in the CSF: Cytomegalovirus, Epstein-Barr virus, Parvovirus, Herpes simplex virus, Varicella zoster virus, Mycoplasma pneumoniae, Borrelia burgdorferi, and Leptospira species. VDRL was also negative in the CSF.
Goldmann perimetry showed an enlarged blind spot and constriction of the nasal field, especially in her left eye (Figure 1). The Farnsworth Munsell 100 color vision test was abnormal in both eyes (in the red axis), with a more pronounced defect in her left eye.
She was started on acyclovir 4 g/day per os, as is indicated to treat adult chickenpox.
Two days after the initial visual symptoms, visual acuity began to increase, and one week later, the optic disc edema had almost resolved (Figure 2). Visual acuity continued to improve, and two months later, she had a visual acuity of 20/20 bilaterally. Retinal fluorescein angiography one month after presentation was normal. She has been free of further attacks of optic neuritis for three years after the admission.
Discussion
The main differential diagnosis in the presence of a young woman with a previous history of venous thrombosis, taking oral contraceptives and presenting with headache and swollen optic discs, includes several causes of intracranial hypertension, especially cerebral venous thrombosis. However, the headache was not suggestive of intracranial hypertension, as there was no exacerbation when lying down or with Valsalva maneuvers. CT scan and gadolinium-enhanced MRI, with intracranial venous imaging, were normal, excluding venous thrombosis and parenchymal or meningeal lesions. Also, the CSF was normal, excluding meningeal inflammation and encephalitis, and there was a normal CSF opening pressure without block, excluding high intracranial pressure as the mechanism of the presenting signs and symptoms.
Optic neuritis may be the presenting feature of CNS demyelinating disorders. However, multiple sclerosis rarely presents with simultaneous, bilateral optic nerve involvement. As for neuromyelitis optica, although more frequently bilateral, retrobulbar neuritis is the rule, with a more severe course without complete recovery. Moreover, there were no white matter lesions in the MRI. These disorders did not seem to explain the clinical picture.
The clinical diagnosis of chickenpox was supported by the presence of the characteristic vesicular rash and the close contact with her infected son some weeks before.
After excluding other etiologies of optic neuritis, the diagnosis of optic neuritis associated with Varicella zoster virus was made. Bilateral involvement, normal CSF (including absence of VZV antibodies), and the immunocompetent state of the patient suggested a postinfectious mechanism rather than a direct invasion of the nerve by the virus.
However, we would expect that, being an immune-mediated disease, the time between the rash and the decrease in visual acuity would be longer than that observed in this patient. This could be explained by an earlier exposure to the virus, given that her son had been ill for more than 2 weeks before the patient developed symptoms. Additionally, there are reports of neurologic disease without rash (zoster sine herpete) [2].
Previous reported cases of optic neuritis following chickenpox in immunocompetent adults had a similar course [3][4][5]. The decrease of visual acuity usually occurs between 2 to 38 days after the onset of the rash and is usually bilateral. Lee and colleagues, however, described a patient with a unilateral anterior optic neuritis that preceded the rash [6].
The pathogenesis of optic nerve involvement by the virus is not well understood. Some authors consider that there is direct nerve invasion in the case of reactivation of the latent infection and an immune-mediated lesion in the primary infection by the virus [7]. In fact, optic neuritis caused by reactivation of a VZV infection is frequently unilateral and associated with orbital inflammation, which suggests a distinct pathogenesis. On the other hand, VZV has not been isolated in the cerebrospinal fluid of immunocompetent patients [4]. This led some authors to hypothesize that the pathogenesis may differ with the state of the patient's immunity. Galbussera et al. [4] suggested that in the immunocompetent the process would be immune-mediated, and in the immunosuppressed there would be direct viral invasion. In the former case, a possible mechanism could be molecular mimicry between viral and neural antigens, or incorporation of viral antigens into neural tissue such as cell membranes or the myelin sheath in a genetically predisposed patient [3]. Considering this, some authors suggest the use of systemic corticosteroids in immunocompetent adults to accelerate recovery [3,4]. Nevertheless, corticotherapy remains controversial, since improvement of visual acuity occurs in its absence [4,5,8], while despite its use some patients maintain severe visual loss during follow-up [3,4]. Additionally, it may, theoretically, exacerbate direct infection by the virus by diminishing the immune response, a phenomenon that has been reported in HIV-positive patients [9]. In our patient, we decided not to initiate corticosteroids, since visual acuity began to recover spontaneously 2 days after the beginning of visual symptoms. This is a particularly interesting case because the decrease in visual acuity occurred 2 days after the optic disc changes were diagnosed, and, to our knowledge, this is the first case described with this presentation. In fact, in contrast to the other cases reported, our patient came to the emergency department early, before the appearance of visual symptoms. This finding helps to clarify the disease's natural history. Clinically, this is important because it suggests that one cannot completely exclude an optic neuritis in a patient with a swollen disc and normal visual acuity. This fact also suggests that optic neuritis due to VZV may be underdiagnosed in patients that do not develop visual loss and so are not submitted to ocular fundus examination. In immunocompetent patients, this does not pose any problem; however, in immunosuppressed ones, diagnosing a papillitis before symptoms arise may be important for treatment and prognosis. Further studies are needed to clarify this issue.
(Figure 2 caption fragment: one week after the initial visual symptoms. In the right optic disc, the limits are blurred more prominently in the superior and inferior poles. In the left eye, the optic disc contour is almost normal.)
A deep ATCA 20cm radio survey of the AKARI Deep Field South near the South Ecliptic Pole
The results are reported of a deep 20 cm radio survey of the AKARI Deep Field South (ADF-S) near the South Ecliptic Pole (SEP), using the Australia Telescope Compact Array (ATCA). The survey has 1 sigma detection limits ranging from 18.7-50 microJy per beam over an area of ~1.1 sq degrees, and ~2.5 sq degrees to lower sensitivity. The observations, data reduction and source count analysis are presented, along with a description of the overall scientific objectives, and a catalogue containing 530 radio sources detected with a resolution of 6.2" x 4.9". The derived differential source counts show a pronounced excess of sources fainter than ~1 mJy, consistent with an emerging population of star forming galaxies. Cross-correlating the radio with AKARI sources and archival data, we find 95 cross matches, with most galaxies having optical R-magnitudes in the range 18-24 mag, and 52 components lying within 1" of a radio position in at least one further catalogue (either IR or optical). We report redshifts for a sub-sample of our catalogue, ranging from galaxies in the local universe to redshifts of up to 0.825. Associating the radio sources with the Spitzer catalogue at 24 microns, we find 173 matches within one Spitzer pixel, of which a small sample of the identifications are clearly radio loud compared to the bulk of the galaxies. The radio luminosity plot and a colour-colour analysis suggest that the majority of the radio sources are in fact luminous star forming galaxies, rather than radio-loud AGN. There are additionally five cross matches between ASTE or BLAST submillimetre galaxies and radio sources from this survey, two of which are also detected at 90 microns, and 41 cross-matches with submillimetre sources detected in the Herschel HerMES survey Public Data release.
INTRODUCTION
A fundamental challenge in contemporary astrophysics is to understand how the galaxies have evolved to their current form. To address this issue, wide area surveys are required to accumulate large statistical samples of galaxies. To study this question, the Japanese AKARI infrared satellite (Murakami et al. 2007) carried out two deep infrared legacy surveys close to the North and South Ecliptic Poles (Matsuhara et al. 2006, Matsuura et al. 2011), which are notable because their sight-lines to the distant Universe have the advantages of low extinction and correspondingly small Hydrogen column densities. To support the two AKARI Deep Fields, sensitive radio surveys have been made of both ecliptic pole regions to study and compare the global properties of the extragalactic source populations (White et al. 2009, White et al. 2010a [hereafter 'Paper 1']). In the present paper the results are reported of a sensitive radio survey at 1.4 GHz, using the Australia Telescope Compact Array (ATCA), of a region that includes both the ADF-S field (Matsuhara et al. 2006, Wada et al. 2008, Shirahata et al. 2009, White et al. 2009, Matsuura et al. 2009, 2011) and a more extended region around it. The ADF-S is the focus of a major multiwavelength observing campaign conducted across the entire spectral region. The combination of these far-infrared data and the depth of the radio observations will allow unique studies of a wide range of topics, including the redshift evolution of the luminosity function of radio sources, the clustering environment of radio galaxies, the nature of obscured radio-loud Active Galactic Nuclei (AGN), and the radio/far-infrared correlation for distant galaxies.
MULTI-WAVELENGTH OBSERVATIONS
The AKARI ADF-S field is a region located close to the South Ecliptic Pole (Matsuura et al. 2009, 2011) with a very low cirrus level ≤ 0.5 MJy sr⁻¹ (Schlegel et al. 1998, Bracco et al. 2011), and a correspondingly low Hydrogen column density of ∼5×10¹⁹ cm⁻². This field is similar to the well known Lockman Hole and Chandra Deep Field-South regions, and has half of the cirrus emission of the well studied COSMOS field at 24 µm. The ADF-S field is therefore one of the best 'cosmological windows' through which to study the distant Universe (Malek et al. 2009, Matsuura et al. 2011, Hajian et al. 2012), and it is now a high priority for astronomers to build ancillary data sets that can be compared with the AKARI data and that prepare for the next set of deep cosmological surveys, such as those provided by Herschel (Pilbratt et al. 2010) and SPICA (Eales et al. 2009, Swinyard et al. 2009).
The AKARI ADF-S survey was primarily made in the far-infrared at wavelengths of 65, 90, 140 and 160 µm over a 12 deg² area with the AKARI Far-Infrared Surveyor (FIS) instrument (Kawada et al. 2007), with shallower mid-infrared coverage at 9 and 18 µm using the AKARI Infrared Camera (IRC) instrument (Onaka et al. 2007). In addition to the wide survey, deeper mid-infrared pointed observations using the IRC, covering ∼0.8 deg² and reaching 5σ sensitivities of 16, 16, 74, 132, 280 and 580 µJy at 3.2, 4.6, 7, 11, 15 and 24 µm, were also carried out. At other wavelengths, the region has recently been mapped by Spitzer's Multi-band Imaging Photometer (MIPS) at 24 and 70 µm (Scott et al. 2010, Clements et al. 2011); by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST) at 250, 350 and 500 µm (Valiante et al. 2010), the latter revealing ∼200 sub-millimetre galaxies over an 8.5 deg² field; and in the ground-based submillimetre band by Hatsukade et al. (2011), revealing 198 potential sub-millimetre galaxies in a ∼0.25 deg² area. The ancillary data sets summarised in Table 1 will be used in calibration of the radio positional reference frame and for cross-identifications later in this paper. The AKARI sensitivity limits correspond approximately to being able to detect starburst galaxies and AGN with a luminosity of 10¹² L⊙ at z = 0.5, or ultraluminous infrared galaxies (ULIRGs) with luminosities of 10¹²-10¹³ L⊙ at z = 1-2 respectively. Note that the ADF-S has also been observed by the Herschel Space Observatory (HSO) (Pilbratt et al. 2010) as part of the Herschel Multi-tiered Extragalactic Survey (HerMES) guaranteed time key program (Oliver et al. 2010).
Optical, radio, X-ray and infrared surveys provide essential support to the interpretation of deep extragalactic radio surveys. The ADF-S has been the focus of recent multi-wavelength survey coverage by our team, with optical imaging with the CTIO 4m telescope (MOSAIC-II detector) to an R-band sensitivity of 25 magnitudes, and at near-IR wavelengths to K ∼ 18.5 magnitudes with the IRSF/SIRIUS instrument, already completed. To support the ADF-S and ATCA surveys, we have separately obtained wide-field imaging in the optical and near-IR at ESO (using WFI and SOFI) and at the AAT (using WFI and IRIS2), for fields of 0.5-1 square degrees, and spectroscopic observations using AAOmega on the AAT (Sedgwick et al. 2009, 2011).
ATCA observations
The radio observations were collected over a 13 day period in July 2007 using the ATCA operated at 1.344 and 1.432 GHz. The total integration time for the 2007 observations was 120 h, spread between 26 overlapping pointing positions to maximise the uv coverage and to mitigate the effects of sidelobes from nearby radio-bright sources. Two of the pointing positions were observed on each night, by taking one five minute integration at each of the two target fields, followed by a two minute integration on the nearby secondary calibrator 0407-658. This cycle was repeated for the different pointing positions, which were observed over ∼10 hour tracks each night, giving similar uv-coverage for each target field. The amplitude scaling was bootstrapped from the primary calibrator PKS 1934-638, which was observed for 10 min at the start of each observing night, and which was assumed to have a flux density of 15.012 Jy at 1.344 GHz and 14.838 Jy at 1.432 GHz respectively. The 2007 data were augmented with a further deep observation made in December 2008 over 5 nights toward a single pointing position at the ADF-S, which lay just off centre of the larger ATCA-ADFS field reported here. This added a further 50 hours of integration time. The data were processed in exactly the same way as that from the 2007 observing sessions.
Calibration
In the following sub-sections the calibration and data reduction methodology are presented. Since much of this is in common with our recent North Ecliptic Pole (NEP) radio survey with the Westerbork Synthesis Radio Telescope (WSRT) discussed in Paper 1, we will not repeat the detailed discussion of this earlier paper, but instead focus on those parts of the calibration methodology that differed from Paper 1. The data were calibrated using the ATNF data reduction package MIRIAD (Sault et al. 1995) using standard procedures. The raw data come in RPFITS format, and were converted into the native MIRIAD format using ATLOD. ATLOD discards every other frequency channel (since they are not independent from one another, hence no information is lost), and additionally flagged out one channel in the higher frequency sideband which contained a multiple of 128 MHz, and thus was affected by self-interference at the ATCA. Channels at either end of the sidebands where the sensitivity dropped significantly were also not used. The resulting data set contained two sidebands, with 13 and 12 channels respectively, each 8 MHz wide, which resulted in a total bandwidth of 200 MHz. The lower frequency sideband was mostly free of RFI and required little editing apart from flagging of bad data. However, the higher frequency sideband suffered from occasional local RF interference, and the affected data were flagged out using the ATNF automated noise flagger PIEFLAG (Middelberg 2006), which eliminated virtually all of the RFI-affected data which would have been flagged in a visual inspection. A visual inspection of the visibilities after using PIEFLAG led to the removal of a few other small sections of RFI-affected data. In total, approximately 3% and 15% of the data were flagged out in the lower and higher bands respectively. Phase and amplitude fluctuations throughout the observing run were then corrected using the interleaved secondary calibrator data, and the amplitudes were scaled by bootstrapping to the primary calibrator. The data were then split by pointing position and each field was individually imaged, before mosaicing to form a master image, sensitivity and noise maps.
Imaging
The data for each of the target pointings were imaged separately using uniform weighting, and gridded with a pixel size of 2.0″ to a common reference frame (to minimise geometrical issues in the mosaicing process). The twenty-five 8 MHz wide frequency channels across the ATCA passband were reduced using MIRIAD's implementation of multi-frequency clean, MFCLEAN, which accounts for variation in the spectral index of the calibration sources across the observed bandwidth. After a first iteration of MFCLEAN, model components with flux densities ≥ 1 mJy beam⁻¹ were used to phase self-calibrate and to correct residual phase errors. The data were then re-imaged and CLEANed for 5000 iterations, at which point the sidelobes of strong sources were generally found to be comparable with the thermal noise, except for a few cases adjacent to bright sources. The individual pointings were then mosaiced together using the MIRIAD task LINMOS, which additionally divides each image by a model of the primary beam attenuation, and uses a weighted average of positions contained in more than one pointing. As a result, pixels at the mosaic edges have a higher noise level. Regions beyond the point where the primary beam response drops below 50% (this occurs at a radius of 35.06′ from the centre of a pointing) were blanked, which resulted in a total survey area of 1.04 deg² (to the limit of the half power beam width at the edges of the master image). The synthesised beam size in the final mosaiced image was 6.2″ × 4.9″ at a position angle of 0 degrees. The sensitivity varies across the image due to primary beam attenuation and the mosaicing strategy, as shown in Figure 1, although the noise level achieved across the map is ∼35% higher than expected for a thermal noise limited survey; this is due to difficulties in removing the sidelobes of strong sources at the edge of the survey field. This is a well known situation that has previously been seen for both ATCA and WSRT radio surveys, and probably results from both the non-circularity of the telescope beam and small movements of the primary beam on the sky caused by random single dish pointing errors (i.e. due to wind/thermal loading) that cause the intensity of bright sources near the edge of the primary beam to vary significantly during an integration, making it difficult to efficiently CLEAN those areas.
SOURCE COMPONENT CATALOGUE
The mosaiced region achieves wide-field coverage and good sensitivity at the price of having an unavoidably non-uniform noise distribution. Statistical characterisation of the completeness of detection at various flux levels is therefore a complex procedure that requires accounting for the observing time, mosaic overlap, and primary beam attenuation. Our source detection was made using locally determined noise levels derived from the noise map (Figure 2) - an approach that has already been used in other studies to improve the efficacy of their source detection catalogues (e.g. Hopkins et al. 1998, Morganti et al. 2004, Paper 1, and the associated NEP component catalogue presented in White et al. 2010b). The component catalogue in this paper was built using the MIRIAD task SFIND in a similar way to that described in Paper 1. Briefly, SFIND uses a statistical technique, the false discovery rate (FDR), which assigns a threshold based on an acceptable rate of false detections (Hopkins et al. 2002). For the ATCA-ADFS data the approach of Hopkins et al. (2002) was followed by adopting an FDR value of 2%. The components identified by SFIND were visually inspected to remove any obvious mis-identifications (e.g. a few residual sidelobe structures immediately adjacent to the brightest components in the mapped region). Independent catalogues derived using the MIRIAD task IMSAD (with a 7σ clip) and using SExtractor (Bertin & Arnouts 1996) with a locally defined background rms were almost identical to the SFIND catalogue. Hopkins et al. (1998) show that using SFIND in this way provides a very robust estimate of the noise level above which there are almost no spurious positive candidates, with the completeness being robustly set by the choice of FDR and the locally determined background noise level. An understanding of source confusion, spurious components, sensitivity and completeness is important in any survey that is analysed to its limit; however, as these become difficult to establish rigorously for a mosaiced image with non-uniform noise properties, and given that some but not all of the components are resolved, it was decided for the source counts analysis in Section 5 to stop the calculation at the very conservative level of 200 µJy, which corresponds to in excess of 10σ signal to noise in the most sensitive parts of the mapped region.
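The false discovery rate thresholding used by SFIND can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical implementation of Benjamini-Hochberg-style FDR selection applied to pixel signal-to-noise values computed against the local noise map; it is not the SFIND code itself, the array names are placeholders, and the correction factor for correlated pixels discussed by Hopkins et al. (2002) is omitted for brevity.

```python
import numpy as np
from scipy.stats import norm

def fdr_detect(image, noise_map, alpha=0.02):
    """Flag pixels above a false-discovery-rate controlled threshold.

    image     : 2-D array of map values (Jy/beam)
    noise_map : 2-D array of locally estimated rms noise (Jy/beam)
    alpha     : acceptable false discovery rate (2% as adopted in the text)
    Returns a boolean mask of candidate 'source' pixels.
    """
    snr = image / noise_map
    # p-value of each pixel under the Gaussian-noise hypothesis
    p = norm.sf(snr).ravel()
    n = p.size
    order = np.argsort(p)
    ranked = p[order]
    # largest rank k for which p_(k) <= (k/n) * alpha (Benjamini-Hochberg)
    passes = ranked <= alpha * np.arange(1, n + 1) / n
    mask = np.zeros(n, dtype=bool)
    if passes.any():
        k = np.max(np.nonzero(passes)[0])
        mask[order[:k + 1]] = True
    return mask.reshape(image.shape)
```

In practice SFIND goes on to fit Gaussians to contiguous regions of flagged pixels; this sketch reproduces only the thresholding step.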
A sample from the final component catalogue is presented in Table 2, and the entire catalogue is included in the electronic online version of this paper.
The positional accuracy listed in Table 2 is relative to the self-calibrated and bootstrapped reference frame described in Section 3. Other effects that bias the positions or sizes of sources in radio surveys have already been presented in Paper 1, to which the reader is referred. An estimate of component dimensions calculated by deconvolving the measured sizes from the synthesised beam is also presented, with Table 2 reporting only those more than double the synthesised beam size.

(Table 2 columns: (1) a short-form running number, with components believed to be parts of multi-component sources marked with a † sign next to the running number (for example 47†); more details about these multi-component sources are given in Table 3; (2) the component name, referred to in this paper as ATCA-ADFS followed by the RA/Dec encoding (e.g. ATCA-ADFS J045243-533127); (3,4) the component Right Ascension and Declination (J2000) referenced to the self-calibrated reference frame; (5,6) the RA and Dec errors in arc seconds; (7,8) the peak flux density, S_peak, and its associated rms error; (9,10) the integrated flux density, S_total, and its associated error; (11,12,13) the size along the major and minor axes of the fitted Gaussian component profile and its orientation; the major and minor axes refer to the full width at half maximum component size deconvolved from the synthesised beam, and the position angle is measured east of north. Component sizes are shown in columns 11 and 12 only for cases where S_total/S_peak ≥ 1.3, as an indicator of a resolved component; components with S_total/S_peak < 1.3 were considered unresolved, and their sizes are not individually reported. All components were additionally checked visually to mitigate against artefacts that might have slipped through the various checks.)
Component extraction
In the terminology of this paper, a radio component is described as a region of radio emission represented by a Gaussian-shaped object in the map. Close radio doubles are represented by two Gaussians and are deemed to consist of two components, which make up a single source. A selection of radio sources with multiple components is shown in Figure 3.
Complex sources
Radio sources are often made up of multiple components, as seen in Figure 3. The source counts need to be corrected for these multi-component sources, so that the fluxes of physically related components are summed together rather than being treated as separate sources. Magliocchetti et al. (1998) have proposed criteria to identify the double and compact source populations, by plotting the separation of the nearest neighbour of a component against the summed flux of the two components, and selecting components where the ratio of their fluxes, f1 and f2, is in the range 0.25 ≤ f1/f2 ≤ 4. In Figure 4 the sum of the fluxes of nearest neighbours is plotted against their separation. The dashed line marks the boundary satisfying the maximum-separation criterion defined by Magliocchetti et al. (1998), expressed in terms of the component separation θ in arc seconds and the summed flux. Therefore 53 radio sources in the present survey (i.e. 10% of the 530 catalogued entries) should be considered to be parts of double or multiple sources according to the Magliocchetti et al. (1998) criterion, and this is taken into account in the source counts discussed later. These components, and their suggested associations, are listed in Table 3.
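As an illustration only, the following sketch groups catalogue components into candidate double or multi-component sources using the nearest-neighbour and flux-ratio test quoted above (0.25 ≤ f1/f2 ≤ 4). The maximum allowed separation is left as a user-supplied function, max_sep_arcsec, because the exact flux-dependent form should be taken from Magliocchetti et al. (1998); that function and the input arrays are assumptions for illustration, not part of the published pipeline.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

def pair_components(ra_deg, dec_deg, s_total_mjy, max_sep_arcsec):
    """Return index pairs of candidate double/multi-component sources.

    ra_deg, dec_deg : component positions (degrees)
    s_total_mjy     : integrated flux densities (mJy)
    max_sep_arcsec  : callable giving the maximum allowed separation (arcsec)
                      as a function of the summed flux (mJy); see
                      Magliocchetti et al. (1998) for the adopted form.
    """
    coords = SkyCoord(ra_deg * u.deg, dec_deg * u.deg)
    # nearest neighbour of each component (nthneighbor=2 skips the self-match)
    idx, sep, _ = coords.match_to_catalog_sky(coords, nthneighbor=2)
    pairs = set()
    for i, (j, s_arcsec) in enumerate(zip(idx, sep.arcsec)):
        f1, f2 = s_total_mjy[i], s_total_mjy[int(j)]
        if 0.25 <= f1 / f2 <= 4.0 and s_arcsec <= max_sep_arcsec(f1 + f2):
            pairs.add(tuple(sorted((i, int(j)))))
    return sorted(pairs)
```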
Flux density and positional accuracy
The flux densities and positional accuracies are presented in Table 2; the method for calculating the positional accuracy is described in Hopkins et al. (2002), and the intensity scales are derived and fully described in Equations 1-5 of Hopkins et al. (2003). Since the methods for measuring the positional and intensity-scale accuracy form part of the SFIND technique, the reader is referred to the papers presenting this technique rather than repeating them here. However, to check the positional accuracy, the ATCA data were cross-correlated against the SUMSS survey (Mauch et al. 2003), where 8 of the bright ATCA sources were found to be within 10″ of a SUMSS source (the SUMSS half-power beam width is 45″ × 57″). After eliminating three components which are resolved and appear as double radio sources in the ATCA data, the average offset between the positions in the two catalogues (ATCA-SUMSS) was (ΔRA, ΔDec) = (+0.43″ ± 2.31″, -2.57″ ± 2.56″), which is consistent with the absolute and systematic errors reported in the SUMSS catalogue. The ATCA component catalogue was also cross-correlated with the positions of bright compact optical galaxies from our CTIO MOSAIC-II survey (see Table 1), which was astrometrically referenced against HST guide stars, and with sources from the DENIS database. The mean of the offsets to the 166 bright galaxies shown in Figure 5 was ΔRA = -0.16″ ± 0.37″ and ΔDec = -0.05″ ± 0.46″, which is also consistent with the SUMSS result.

(Figure caption fragment: the contours are at 0.0001, 0.0003, 0.0005, 0.001, 0.003, 0.006, 0.012, 0.024, 0.048 and 0.096 Jy beam⁻¹ respectively; the Right Ascension/Declination scales can be derived using the component locations in Table 2.)
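The positional checks described above amount to a simple nearest-neighbour cross-match. The sketch below, a minimal example using astropy rather than the actual pipeline, matches the ATCA component positions against a reference catalogue and reports the mean and rms RA and Dec offsets; the array names are placeholders.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

def astrometric_offsets(ra_radio, dec_radio, ra_ref, dec_ref, max_sep_arcsec=10.0):
    """Mean and rms (RA, Dec) offsets, in arcsec, in the sense radio minus reference.

    Only matches closer than max_sep_arcsec are used.
    """
    radio = SkyCoord(ra_radio * u.deg, dec_radio * u.deg)
    ref = SkyCoord(ra_ref * u.deg, dec_ref * u.deg)
    idx, sep, _ = radio.match_to_catalog_sky(ref)
    ok = sep < max_sep_arcsec * u.arcsec
    # offsets from each matched reference position to its radio position
    d_ra, d_dec = ref[idx[ok]].spherical_offsets_to(radio[ok])
    d_ra, d_dec = d_ra.to(u.arcsec).value, d_dec.to(u.arcsec).value
    return (d_ra.mean(), d_ra.std()), (d_dec.mean(), d_dec.std())
```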
Summary of flux density corrections for systematic effects
There are two main systematic effects which have been taken into account in estimating the ATCA flux densities, namely clean bias and bandwidth smearing. Bandwidth smearing is the radio analogue of optical chromatic aberration, resulting from the finite width of the receiver channels compared to the observing frequency. It reduces the peak flux density of a source while correspondingly increasing, or blurring, the source size in the radial direction, such that the total integrated flux density is conserved but the peak flux is reduced.
From Condon et al. (1998), the reduction in the peak flux of a compact radio source caused by bandwidth smearing, that is the ratio S_peak/S⁰_peak of the off-axis peak flux to the peak flux at the centre of the primary beam, depends on the product (Δν/ν)(d/θ_b), where Δν and ν are the bandwidth and observing frequency respectively, d is the off-axis distance, and θ_b is the synthesised beamwidth. Prandoni et al. (2000a) have made a detailed study of this for the ATCA telescope, finding similar behaviour.
Fortunately, the closely spaced mosaicing strategy used for the ATCA SEP observations allows the smearing effect to be measured directly, by monitoring the peak and integrated flux densities of four bright compact sources that were present in virtually every one of the observed fields, but at different distances from the centre of the beam. Figure 6 shows the measured smearing factor k, which we define as the ratio of the peak flux of a compact source, normalised to that which it has when at the centre of a beam, as a function of distance from the beam centre. This Figure shows that the experimental data points are reasonably well fit by the theoretical relationship of Condon et al. (1998), which is overlaid as a solid line on Figure 6. This correction was taken into account when estimating the peak fluxes listed in Table 2.

(Table 3 columns: (1) the components identified according to their running numbers in the main catalogue; (2,3) the mean Right Ascension and Declination (J2000), taken as the average of the positions of the individual components; (4) the distance between the components (rounded up to the nearest arc second); (5) the sum of the total flux densities of the individual components; (6) the error on this, estimated by adding the total flux errors in quadrature.)
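The empirical measurement of the smearing factor described above can be sketched as follows. This hypothetical snippet normalises the peak flux of each bright calibration source by its value in the pointing where it lies closest to a beam centre, and bins the result against off-axis distance; the input arrays are placeholders, not the published measurements.

```python
import numpy as np

def smearing_factor(peak_flux, offaxis_arcmin, n_bins=10):
    """Empirical bandwidth-smearing factor k versus off-axis distance.

    peak_flux      : 2-D array (n_sources, n_pointings) of measured peak fluxes
    offaxis_arcmin : matching array of distances from each pointing centre
    Returns bin centres (arcmin) and the median normalised peak flux per bin.
    """
    # normalise each source by its peak flux in the pointing where it is
    # closest to the beam centre
    ref = peak_flux[np.arange(len(peak_flux)), offaxis_arcmin.argmin(axis=1)]
    k = (peak_flux / ref[:, None]).ravel()
    d = offaxis_arcmin.ravel()
    edges = np.linspace(0.0, d.max(), n_bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    k_med = np.array([np.median(k[(d >= lo) & (d < hi)])
                      if np.any((d >= lo) & (d < hi)) else np.nan
                      for lo, hi in zip(edges[:-1], edges[1:])])
    return centres, k_med
```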
It is well known that, as well as affecting single pointings, bandwidth smearing can act in a more complex way to reduce point source fluxes in large mosaiced fields (e.g. Bondi et al. 2008, White et al. 2010). We have empirically examined the effect of bandwidth smearing on our mosaiced data by following the approach adopted by Bondi et al. (2008), comparing the fluxes of bright sources observed close to the centres of individual pointings with their fluxes determined after mosaicing together to form a merged image. Although the bandwidth smearing can be accounted for using the relation of Condon et al. (1998), as Bondi et al. (2008) discuss, its contribution to measurements of the peak fluxes in radio surveys is more difficult to rigorously quantify for mosaiced data, where the smearing would need to be modelled with a more complicated function that represents the spacing pattern of the individual pointings. Due to the difficulty in rigorously calculating this, we have therefore followed the Bondi et al. (2008) approach to estimate the most probable reduction to the peak flux densities, as this correction will slightly modify estimates of the source sizes.
Therefore, we ran the same procedure that was used to produce the final radio catalogue on each of the individual pointings. For the strongest unresolved sources (≳1 mJy), the peak and total flux densities measured from the final mosaiced image were compared with the corresponding peak and total flux densities from the individual pointings, using sources that were no further than 5′ from an individual pointing centre (this is consistent with the Bondi et al. 2008 approach for the VLA, which is supported for the ATCA by our own results shown in Fig. 7). The total flux densities of each source in the mosaic were in good agreement (the median ratio was measured to be 1.01 with an rms dispersion of 0.02), as would be expected for complete recovery of the flux. However, for components whose peak fluxes are affected by bandwidth smearing, the peak fluxes could be underestimated in the final mosaic on average by up to 20%. The peak fluxes listed in Table 2 have all been corrected for this, according to their distance from the centre of the mosaiced image at Right Ascension (J2000) = 04h 46m 46.5s, Declination (J2000) = -53° 24′ 59.0″.
The other main effect that can influence fluxes is clean bias. Radio surveys, and in particular those consisting of short snapshot observations, have a tendency to be affected by the clean bias effect, where the deconvolution process leads to a systematic underestimation of both the peak and total source fluxes. This is a consequence of the constraints on the cleaning algorithm due to sparse uv coverage (see Becker et al. 1995, White et al. 1997, Condon et al. 1998), and has the effect of redistributing flux from point sources to noise peaks in the image, reducing the flux density of the real sources. As the amount of flux which is taken away from real sources is independent of the source flux densities, the fractional error this causes is most pronounced for weak sources. Prandoni et al. (2000a, b) have shown that it is possible to mitigate clean bias if the CLEANing process is stopped well before the maximum residual flux has reached the theoretical noise level. Consequently the cleaning limit was set at 5 times the theoretical noise, to ensure that the clean bias does not significantly affect the source fluxes in the present survey (Garrett et al. 2000). Gruppioni et al. (1999) adopted a similar strategy in an ATCA survey of the ELAIS N1 field, and found the effect to be insignificant (less than 2.5%) for the faintest sources (5σ detections) and to have no effect on sources brighter than 10σ, for similar numbers of CLEAN cycles as those performed on the present ATCA data. We therefore conclude that clean bias will have a negligible effect on the present data.
DIFFERENTIAL COUNTS
In Figure 8 the differential radio source counts are shown for the ATCA-ADFS field, normalised to a static Euclidean universe (dN/dS × S^2.5, in sr⁻¹ mJy^1.5). These source counts are broadly consistent with previous results at 1.4 GHz (e.g. the compilation of Windhorst et al. 1993, the PHOENIX Deep Survey of Hopkins et al. 2003, and the shallow NEP survey of Kollgaard et al. 1994).
The data from Figure 8 are given in Table 4, where the integrated flux bins and mean fluxes for each of the bin centres are listed in columns (1) and (2), the number of sources corrected for clean and resolution bias is shown in column (3), the number of sources corrected for the area coverage and multi-component sources in column (4), and in the final column (5) the differential source counts and their associated errors as defined by Kollgaard et al. (1994) are listed.
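For illustration, the Euclidean-normalised differential counts of Figure 8 can be computed from a component list with a few lines of code. The sketch below is a generic, hypothetical implementation: it bins integrated fluxes, divides by the bin width and the effective survey area, and normalises by S^2.5; it omits the clean-bias, resolution-bias and multi-component corrections described above, which would be applied to the raw counts first.

```python
import numpy as np

def euclidean_counts(flux_mjy, bin_edges_mjy, area_sr):
    """Euclidean-normalised differential source counts, dN/dS * S^2.5.

    flux_mjy      : array of integrated flux densities (mJy)
    bin_edges_mjy : flux bin edges (mJy)
    area_sr       : effective survey area (sr), scalar or one value per bin
    Returns bin-centre fluxes (mJy), normalised counts (sr^-1 mJy^1.5),
    and simple Poisson errors.
    """
    counts, _ = np.histogram(flux_mjy, bins=bin_edges_mjy)
    widths = np.diff(bin_edges_mjy)
    centres = np.sqrt(bin_edges_mjy[:-1] * bin_edges_mjy[1:])  # geometric mean
    dnds = counts / (widths * np.asarray(area_sr))
    norm_counts = dnds * centres ** 2.5
    errors = norm_counts / np.sqrt(np.maximum(counts, 1))
    return centres, norm_counts, errors
```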
To model the observed source counts, a two component model was used consisting of a classical bright radio-loud population and a fainter star-forming population. It is well established that classical bright radio galaxies require strong evolution in order to fit the observed source counts at radio wavelengths (Longair 1966, Rowan-Robinson 1970). The source counts above 10 mJy are dominated by giant radio galaxies and QSOs (powered by accretion onto black holes, commonly joined together in the literature under the generic term AGN). Radio-loud sources dominate the source counts down to levels of ∼1 mJy; however, at the sub-mJy level the normalised source counts flatten as a new population of faint radio sources emerges (Windhorst et al. 1985). The dominance of starburst galaxies in the sub-mJy population is already well established (Gruppioni et al. 2008), where the number of blue galaxies with star-forming spectral signatures is seen to increase strongly. Rowan-Robinson et al. (1980, 1993), Hopkins et al. (1998), and others have concluded that the source counts at these faintest levels require two populations, AGN and starburst galaxies. This latter population can best be modelled as a dusty star-forming population, under the assumption that it is the higher redshift analogue of the IRAS star-forming population (Rowan-Robinson et al. 1993, Pearson & Rowan-Robinson 1996). In this scenario, the radio emission originates from the non-thermal synchrotron emission of relativistic electrons accelerated by supernova remnants in the host galaxies.
To represent the radio-loud population, the luminosity function of Dunlop & Peacock (1990) was used (parameters in Table C3 of their paper) to model the local space density, with the assumption that the population evolves in luminosity with increasing redshift. The luminosity evolution follows a power law with redshift of (1 + z)^3.0, broadly consistent with both optically and X-ray selected quasars (Boyle et al. 1987). The spectrum of the radio-loud population was obtained from Elvis, Lockman & Fassnacht (1994), assuming a steep radio spectrum source (S_ν ∝ ν^-α, α = 1).
To model the faint sub-mJy population we use the IRAS 60 µm luminosity function of Saunders et al. (2000), with the parameters for the star-forming population, defined by warmer 100 µm / 60 µm IRAS colours, given in Pearson (2001, 2005) and Sedgwick et al. (2012).
To convert the infrared luminosity function to radio wavelengths, we derive below the ratio of the 60 µm luminosity to the radio luminosity from the well established correlation between the far-IR and radio flux (e.g. Helou, Soifer & Rowan-Robinson 1985, Yun, Reddy & Condon 2001, Appleton et al. 2004). Helou et al. (1985) defined this relation between the far-infrared flux, FIR/W m⁻², and the 1.4 GHz radio emission, S_1.4GHz/W m⁻² Hz⁻¹, in terms of the q factor, q = log10[(FIR / 3.75 × 10¹² Hz) / S_1.4GHz]. The far-infrared flux defined by Condon (1991) in terms of the 60 µm and 100 µm emission can be written as FIR = 1.26 × 10⁻¹⁴ (2.58 S_60µm + S_100µm) W m⁻², with S_60µm and S_100µm in Jy, where the spectrum between 60 µm and 100 µm is defined by a spectral index α. Substituting the latter relation into the former, and assuming a value of q = 2.3 (Condon 1991, 1992) and a value of α = 2.7 (Hacking et al. 1987), it is then straightforward to derive the S_60µm/S_1.4GHz ratio. To convert the infrared luminosity function to radio wavelengths we adopt this S_60µm/S_1.4GHz ratio. We utilise the spectral template of the archetypal starburst galaxy M82 from the models of Efstathiou, Rowan-Robinson & Siebenmorgen (2000) for the spectral energy distribution of the star-forming population. The radio and far-infrared fluxes are correlated due to the presence of hot OB stars in giant molecular clouds that heat the surrounding dust, producing the infrared emission. These stars subsequently end their lives as supernovae, with the radio emission powered by the synchrotron emission from their remnants. The radio spectrum is characterised by a power law (S_ν ∝ ν^-α, α = 0.8).
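A short numerical check of this conversion is sketched below. It simply evaluates the two relations above for the quoted q = 2.3 and a 60-100 µm spectral index of 2.7. The assumed direction of the spectral index (S_ν ∝ ν^-α, so that S_100µm/S_60µm = (100/60)^α) and the resulting numerical ratio are illustrative assumptions for this sketch rather than values taken from the paper.

```python
def s60_over_s14(q=2.3, alpha_fir=2.7):
    """Illustrative 60 micron to 1.4 GHz flux-density ratio.

    Uses FIR = 1.26e-14 * (2.58*S60 + S100) W m^-2 (S60, S100 in Jy) and
    q = log10[(FIR / 3.75e12 Hz) / S_1.4GHz], with S_1.4GHz in W m^-2 Hz^-1.
    Assumes S_nu ~ nu^-alpha between 60 and 100 micron, so that
    S100/S60 = (100/60)**alpha_fir.
    """
    s100_over_s60 = (100.0 / 60.0) ** alpha_fir
    # FIR generated per Jy of 60 micron flux, expressed in Jy after dividing
    # by 3.75e12 Hz (1 Jy = 1e-26 W m^-2 Hz^-1)
    fir_per_s60_jy = 1.26e-14 * (2.58 + s100_over_s60) / 3.75e12 / 1e-26
    return 10.0 ** q / fir_per_s60_jy

print(f"S_60um / S_1.4GHz ~ {s60_over_s14():.0f}")  # roughly 90 under these assumptions
```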
Pure luminosity evolution for the star-forming population is assumed, with a best fit power law ∝ (1 + z)^3.2. This infrared representation of the star-forming population was preferred over using the radio luminosity function directly, since it creates a phenomenological link between the radio emission and the infrared emission which is responsible for the bulk of the output of the star-forming population. The observed number counts at fainter fluxes (<1 mJy) vary widely from survey to survey, resulting in a distribution of best fitting evolution parameterisations. Huynh et al. (2005) used the radio luminosity function of Condon et al. (2002) and derived a best fitting evolution parameterisation ∝ (1 + z)^2.7, slightly lower than the work presented here. Hopkins (2004) and Hopkins et al. (1998) used radio and infrared luminosity functions respectively, obtaining evolution in the sub-mJy population ∝ (1 + z)^2.7 and ∝ (1 + z)^3.3 respectively. Comparing our observations and assumed evolution with the results of our survey in the AKARI deep field at the North Ecliptic Pole (Paper 1), we find that our derived evolution for the AGN and star-forming components ((1+z)^3.0 and (1+z)^3.2 respectively) is consistent with the values arrived at for the survey at the North Ecliptic Pole ((1+z)^3.0 for both components). Our surveys at both ecliptic poles (each covering areas of ≳1 deg², similar to the VLA-COSMOS survey of Bondi et al. (2008) and larger than the other surveys depicted in Figure 9) result in number counts at the lower end of the emerging picture of excess sub-mJy radio counts, as shown in Figure 9.
CROSS-MATCHES WITH DEEP FIELD CATALOGUES AT OTHER WAVELENGTHS
To compare the ATCA radio catalogue with the AKARI FIS 90 µm ADF-S catalogue (Shirahata et al. in preparation), we have cross-matched AKARI sources and radio components to search for positional coincidences ≤ 6″, based on the AKARI positional uncertainty as defined in Verdugo et al. (2007) and the radio components reported in this paper. In the case of the possible double or complex radio sources (see Figures 4 and 3) we have also searched for candidate identifications along a line joining the presumed associated radio components. From this cross-matching we recovered 35 sources in common to both catalogues, twenty-five of which are also reported in the Spitzer 70 µm catalogue (Clements et al. 2011). We list the ATCA-AKARI cross-matched sources in Table 5, along with R-band detections from our CTIO MOSAIC-II survey (or, in a few cases marked with a dagger symbol, from DENIS R-band fluxes), and redshifts from the AAT/AAOmega redshift survey. The Herschel data are described in the caption of Table 5. For the 41 cross-matched sources, all of which lay within a 5″ error circle, the mean positional agreement was 1.93 ± 1.25″, showing very good agreement of the coordinate systems.
Infrared cross matches (AKARI, Spitzer)
Figure 10 shows the comparison between the fluxes of matched ATCA radio and 90 µm sources detected in our survey, as well as a larger sample of radio and 90 µm fluxes taken by cross-correlating the AKARI All-Sky Survey FIS catalogue (Yamamura et al. 2010, Oyabu et al. 2009, 2010) with the compilation of radio sources given by Dixon (1970). Although this Figure does not apply a K-correction to the measured fluxes, it does show that although many of the ATCA-ADFS sources fall on an extrapolation of sources from Dixon's list to lower fluxes, several of them may be radio loud compared to the majority (in other words, lie significantly to the right of the trend line), and therefore may have active nuclei. Of these, the two most extreme are the following. Firstly, ATCA component 18 (J04421266-5355520) at redshift 0.044 appears in the NED extragalactic database as a bright edge-on galaxy.

The radio identifications in Table 2 were cross-correlated with the Spitzer 24 µm and 70 µm catalogues (Scott et al. 2010), finding 173 and 31 matches at 24 µm and 70 µm respectively, using the Spitzer single pixel size (2.45″ and 4.0″ at 24 µm and 70 µm respectively) as the search radius. The results of the 24 µm cross-matches are shown in Figure 11, and the large scatter of the plot highlights the difficulties of using the 24 µm fluxes as indicators of the radio flux. This plot resembles that of Norris et al. (2006), showing a wide dispersion. To check for chance associations, the radio coordinates were incremented by 60″ in both RA and Dec, and this new list of positions was cross-correlated with the Spitzer data to simulate what should be blank fields, resulting in 5 matches. Assuming that these are chance associations, the majority of the matched components (≥ 97%) are likely to be real associations. The brightest Spitzer source shown in Figure 11 is ATCA component 187, which is associated with an R = 17.7 magnitude galaxy and has a redshift of 0.121 (see Table 5). The ATCA components with Spitzer detections which have flux densities ≥ 10 mJy are 11, 12, 155, 160, 236, 446, 448, 458 and 530.
Radio luminosity
The radio luminosities of the sources listed in Table 5 were calculated assuming a cosmology of H_0 = 70 km s⁻¹ Mpc⁻¹, with matter and cosmological constant density parameters of Ω_M = 0.3 and Ω_Λ = 0.7. The redshifts were measured using AAOmega, the fibre-fed optical spectrograph at the Anglo-Australian Observatory, as described by Sedgwick et al. (2011), and the resulting plot of radio luminosity against redshift is shown in Figure 12, where we assume a mean radio spectral index of α = -0.7 (where S ∝ ν^α) and apply the usual form of the k-correction, κ(z) = (1+z)^-(1+α), at redshift z.
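The luminosity calculation can be reproduced as in the minimal sketch below, which uses astropy's FlatLambdaCDM cosmology with the parameters quoted above and the stated k-correction; the function name and the example numbers plugged in are placeholders drawn from elsewhere in this paper.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)  # Omega_Lambda = 0.7 follows for a flat model

def radio_luminosity_w_hz(s_mjy, z, alpha=-0.7):
    """Rest-frame 1.4 GHz luminosity (W/Hz) for an observed flux density s_mjy (mJy) at redshift z.

    Applies the k-correction kappa(z) = (1 + z)**-(1 + alpha), with S ~ nu^alpha.
    """
    s = (s_mjy * u.mJy).to(u.W / u.m ** 2 / u.Hz)
    d_l = cosmo.luminosity_distance(z).to(u.m)
    return (4.0 * np.pi * d_l ** 2 * s * (1.0 + z) ** (-(1.0 + alpha))).value

# example: the 0.203 mJy AzTEC-matched source at z = 0.825 quoted later in the text
print(f"{radio_luminosity_w_hz(0.203, 0.825):.2e} W/Hz")
```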
From studies of the local 1.4 GHz luminosity function, Sadler et al. (2002) and Mauch & Sadler (2007) have shown that the low luminosity population with radio luminosity ∼10²³ W Hz⁻¹ will mostly be luminous star-forming galaxies rather than radio-loud AGN (Eales et al. 2009, Jarvis et al. 2010, Hardcastle et al. 2010). Although most of the cross-matched ATCA/AKARI sources shown in Fig. 10 fall on the trend shown by the wider sample of cross matches between AKARI and Dixon's catalogue, ATCA components 18 and 187 appear to be radio-loud, although at the lower end of the luminosities reported from the local luminosity function for this class. However, since most of the other cross-matches fit the trend line, we conclude that the ATCA/AKARI cross identifications primarily trace the star-forming galaxy population.
Infrared Colours
To further investigate the nature of the ATCA/AKARI population, we compare the infrared colours of the components from our ATCA survey that have cross matches in both the AKARI 90 µm and Spitzer 24 µm and 70 µm bands. In Figure 13 we plot the 24 µm/90 µm versus 90 µm/70 µm colour distribution of our sources. The model colours were derived from the SEDs smoothed by the filter bands, and the data are overlaid onto the spectral tracks of an ensemble of star-forming galaxies from the models of Efstathiou et al. (2000), with increasing far-infrared luminosity from L_IR > 10^10 L⊙, 10^11 L⊙ and 10^12 L⊙, together with the spectral track of an AGN torus from the models of Efstathiou & Rowan-Robinson (1995). From Figure 13 the infrared colours of the ATCA/AKARI population are consistent with those of star-forming galaxies, although there are selection effects (the requirement to have a 90 µm cross-match) which may bias this, and which would need to be tested with more sensitive infrared observations.
As a further check, the line ratios of [OIII]/Hβ (lines at wavelengths of 486.1, 495.8 and 500.7 nm) and [OIII]/[OII] (OII doublet at 372.7 nm) were checked from the AAOmega spectra for the sources with redshifts > 0.1, with the result that only one component (ATCA 302) shows ratios that are close to typical AGN values (Sedgwick et al. in preparation). Therefore the radio luminosities (Figure 12), infrared colour-colour plots (Figure 13) and AAOmega spectra all show a consistent picture, suggesting that the ATCA/AKARI cross-identifications predominantly trace a star-bursting population.

(Figure 13 caption fragment: the black squares represent the catalogue sources, with the spectral tracks of an ensemble of star-forming galaxies of increasing far-infrared luminosity from L_IR > 10^10 L⊙, 10^11 L⊙ and 10^12 L⊙, and a spectral track from an AGN, also plotted. The large crosses show the zero-redshift points, with the further smaller crosses corresponding to steps in redshift of 0.1.)
Optical identifications
The positions of the components listed in Table 5 were compared with those in the CTIO MOSAIC-II survey (see Table 1), taking a maximum search radius of 1″ (based on the offsets to bright radio sources described in Section 4.3). The number of galaxies as a function of R-magnitude was calculated from the CTIO MOSAIC-II survey, which covers an area of 1.84 × 0.64 degrees centred at RA (J2000) = 04h 43m 32.8s, Declination (J2000) = -53° 34′ 51″. Based on our choice of a radio error box of 1″ search radius, the chance probability that a 23rd magnitude galaxy (the most numerous in the above plot) should randomly coincide with a radio component is 0.6%. Making an additional correction for the fact that some of the galaxies are extended or saturated, the chance association of a galaxy with a radio component is still ≤ 1%.
Postage stamp cutouts of 18″ × 18″ regions around the sources with CTIO MOSAIC-II matches in Table 5 are shown in Figure 14, where the radio component is located at the centre of each box.
The full radio catalogue (Table 2) was then cross-correlated against the CTIO MOSAIC-II survey, resulting in 95 matches within a search radius of 1″. To test the false identification rate, arbitrary 60″ offsets were again added to both the RA and Dec coordinates of the radio components, and the cross-match was repeated, resulting in only two galaxies as probable false identifications, which is roughly consistent with our estimate of the likely false detection rate discussed previously. We can therefore be confident to a high degree of the efficacy of our cross identifications. These are shown in Figure 15, along with the (probably false) detections obtained by arbitrarily shifting the radio positions.
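The shifted-coordinate test for false identifications can be expressed compactly. The sketch below is a hypothetical helper that counts radio components with an optical counterpart within the chosen radius, then repeats the count after adding 60 arcsec to the radio RA and Dec values to sample what should be blank sky; all array and function names are placeholders.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

def match_count(radio, optical, radius_arcsec=1.0):
    """Number of radio components with an optical counterpart within the radius."""
    idx, sep, _ = radio.match_to_catalog_sky(optical)
    return int((sep < radius_arcsec * u.arcsec).sum())

def false_match_test(ra_radio, dec_radio, ra_opt, dec_opt, radius_arcsec=1.0):
    radio = SkyCoord(ra_radio * u.deg, dec_radio * u.deg)
    optical = SkyCoord(ra_opt * u.deg, dec_opt * u.deg)
    real = match_count(radio, optical, radius_arcsec)
    # offset the radio coordinate values by 60" in both RA and Dec, as in the text
    shifted = SkyCoord((ra_radio + 60.0 / 3600.0) * u.deg,
                       (dec_radio + 60.0 / 3600.0) * u.deg)
    chance = match_count(shifted, optical, radius_arcsec)
    return real, chance
```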
The distribution of associated galaxies with magnitude is similar to that found in the CDFS field by Simpson et al. (2006) and Mainieri et al. (2008), with the number of detections rising from an R-magnitude of ∼17.
Sub-millimetre cross matches
The ATCA radio catalogue was searched for matches with the ASTE/AzTEC 1.1 mm deep survey (Hatsukade et al. 2011), which contains the locations of 198 potential sub-millimetre galaxies over a ∼0.25 deg² area. We find one credible match that is consistent with the positional errors: AzTEC J044435.35-534346.6 lies 5.5″ from ATCA component 120. This source has a de-boosted 1.1 mm flux of 2.8 ± 0.5 mJy, a 20 cm radio flux of 0.203 mJy, an R-magnitude of 21.4 from our CTIO imaging survey, and, at 0.825, the highest redshift amongst the ATCA/AKARI detections in the AAT AAOmega redshift survey (Sedgwick et al. 2011).
We have also cross-correlated the ATCA catalogue with the BLAST South Ecliptic Pole catalogue (Valiante et al. 2010), finding five cross-matched associations within 10″ of a radio position (i.e. searching to one third of the BLAST beam width). These sources are ATCA component numbers 112, 125, 168, 316 and 409, with 250 µm fluxes of 205 mJy, 119 mJy, 177 mJy, 467 mJy and 130 mJy respectively. The first two of these are also listed in Table 5 as having AKARI cross-matches. We have also cross-correlated the ATCA radio sources with the Herschel-HerMES Public Data Release catalogue, finding 41 cross-matches, the majority confirming our AKARI detections.
CONCLUSIONS
(i) A deep radio survey has been made of a ∼1.1 deg² area around the ATCA-ADFS field using the ATCA telescope at 20 cm wavelength, and of ∼2.5 deg² to lower sensitivity. The best sensitivity of the survey was 21 µJy beam⁻¹, achieved with a synthesised beam of 6.2″ × 4.9″. The analysis methodology was carefully chosen to mitigate the various effects that can affect the efficacy of radio synthesis array observations, resulting in a final catalogue of 530 radio components, with the faintest integrated fluxes at about the 100 µJy level. The present catalogue of radio components will form the basis of a further paper reporting cross-correlation against extant AKARI and deep optical imaging. Our derived sub-mJy number counts are consistent with, but lie at the lower end of, the emerging picture for the excess in the radio counts below 1 mJy. Fitting an evolving galaxy model to our derived counts, we find a consistent picture of radio-loud dominated sources at bright fluxes and an emerging population of star-forming galaxies at radio flux levels <1 mJy.
(ii) Cross-correlating these with far-infrared sources from AKARI, archival optical photometry, Spitzer and BLAST data, we find 51 components lying within 1′′ of a radio position in at least one further catalogue. From optical identifications of a small segment of the radio image, we find 95 cross-matches, with most galaxies having R-magnitudes in the range 18-24, similar to that found in other optical deep field identifications. The redshifts of these range from the local universe up to 0.825. Associating with the Spitzer catalogue, we find 173 matches at 24 µm, within one Spitzer pixel, of which a small sample are clearly radio loud compared to the bulk of the galaxies.
(iii) The radio luminosity plot suggests that the majority of the radio sources with 90 µm counterparts are luminous star forming galaxies.This conclusion is supported by a comparison of the infrared colours of our matched sources which are well described by the colours expected from star-forming galaxies.
(iv) There is one cross-match with an ASTE source, and five cross-matches with BLAST submillimetre galaxies, from the radio sources detected in the present survey; two of these are also detected by AKARI at 90 µm. There are also 41 detections with Herschel, of which 12 had not previously been identified by AKARI.
ACKNOWLEDGEMENTS
This work is based on observations with AKARI, a JAXA project with the participation of ESA.We also express our thanks to The Australia Telescope Compact Array for the substantial allocation of observing time; to the staff of the Narrabri Observatory for technical support; and the UK Science and Technology Facilities Council, STFC for support.The UK-Japan AKARI Consortium has also received funding awards from the Sasakawa Foundation, The British Council, and the DAIWA Foundation, which facilitated travel and exchange activities, for which we are very grateful.This work was supported by KAKENHI (19540250 and 21111004).
Table 7. The complete source catalogue (the full version is available as Supplementary Material in the online version of this article). The source parameters listed in the catalogue are: (1) a short-form running number (components that are believed to be parts of multi-component sources are listed with a † sign next to the running number, for example 47†); (2) the source name, referred to in this paper as ATCA-ADFS followed by the RA/Dec encoding (e.g. ATCA-ADFS J045243-533127); (3,4) the source Right Ascension and Declination (J2000) referenced from the self-calibrated reference frame; (5,6) the RA and Dec errors in arcseconds; (7,8) the peak flux density, S_peak, and its associated rms error; (9,10) the integrated flux densities, S_total, and their associated errors; (11,12,13) the major and minor axes of the fitted Gaussian source profile and its orientation (major and minor axes full width at half maximum, and position angle measured east of north).
No
Source
Figure 1 .
Figure 1.The central area of the ATCA 20 cm map, corrected for the primary beam of the antenna.The contours show the rms noise levels in µJy beam −1 estimated locally from the noise map by binning the data into 40×40 pixel regions.
Figure 2 .
Figure 2. The horizontal axis shows the SFIND detection threshold as a function of areal coverage. The area used for the differential source count estimation in Section 5 is shown as a solid line, and has a maximum value of 1.04 degree², whereas that of the full image (whose radio components are listed in Table 2) is indicated by the dot-dash line and has a maximum value of 2.55 degree².
Figure 4 .
Figure 4. This Figure shows the sum of the flux densities of the nearest neighbours between components in the detection catalogue. Following Magliocchetti et al. (1998), points to the left of the dashed line are possible double sources. The likelihood that two components in a pair are related is further constrained (Magliocchetti et al. 1998) by requiring that the fluxes of the two components, f_1 and f_2, should be in the range 0.25 ≤ f_1/f_2 ≤ 4. Sources in the Figure whose components satisfy this additional criterion are shown as bold circles.
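A minimal sketch of the flux-ratio screen described in this caption is given below; the separation/summed-flux selection of Magliocchetti et al. (1998) is taken as already applied, and the component pairs listed are invented for illustration only.

```python
# The separation/summed-flux cut of Magliocchetti et al. (1998) is assumed to
# have been applied already; this only applies the additional flux-ratio test.
def is_plausible_double(f1, f2):
    """Components of a genuine double should satisfy 0.25 <= f1/f2 <= 4."""
    ratio = f1 / f2
    return 0.25 <= ratio <= 4.0

# Invented (flux_1 mJy, flux_2 mJy) pairs for illustration only.
pairs = [(0.8, 0.5), (3.2, 0.4)]
doubles = [p for p in pairs if is_plausible_double(*p)]
print(doubles)  # the first pair passes (ratio 1.6); the second fails (ratio 8)
```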
Figure 6 .
Figure 6. Variation of the smearing factor with distance from the centre of an individual field for a sample of components common to many of the individual fields. The solid curve is the expected theoretical curve from Condon et al. (1998), which closely matches that shown by Huynh et al. (2005) in their ATCA observation of the Hubble Deep Field South, and the squares show the .
Figure 7 .
Figure 7. Ratio between the peak flux densities in the final mosaic and in the individual pointing where the source is within 5′ of the centre, vs. the radial distance from the centre of the final mosaic. Only compact sources with flux densities greater than 1 mJy beam⁻¹ are plotted. The fitted curve corresponds to a second-order polynomial fit to the data, S(mosaic)/S(single field) = 0.82 + 5.64×10⁻⁵ d_centre + 1.93×10⁻⁷ d_centre², where S(mosaic)/S(single field) and d_centre, the distance in arcseconds from the centre of the mosaiced field, correspond to the vertical and horizontal axes.
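For orientation, the quoted polynomial can be evaluated directly; the coefficients below are copied from the caption and the sample distances are arbitrary.

```python
# Coefficients copied from the caption; d is the distance from the mosaic
# centre in arcseconds. Sample distances are arbitrary.
def mosaic_to_single_ratio(d):
    return 0.82 + 5.64e-5 * d + 1.93e-7 * d**2

for d in (0.0, 500.0, 1000.0):
    print(f"d = {d:6.0f} arcsec  ->  ratio = {mosaic_to_single_ratio(d):.3f}")
# e.g. at d = 1000 arcsec the ratio is about 1.07
```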
Figure 8 .
Figure 8. Differential counts determined from the AKARI ATCA-ADFS 20 cm deep field. The relationship for calculating the numbers in this plot and in Table 4 is the same as that used by Kollgaard et al. (1994).
Figure 9 .
Figure 9. A compilation of the differential source counts of a number of deep 20 cm radio surveys taken from: SWIRE (Owen & Morrison 2008); COSMOS (Bondi et al. 2008); SSA13 (Fomalont et al. 2006); SXDF (Simpson et al. 2006); HDF-N, LOCKMAN and ELAIS N2 (Biggs & Ivison 2006); and the HDF-S (Huynh et al. 2005). The solid curve is the best fit to the present data, derived as described in Figure 8. There are, however, differences in the instrumental and systematic corrections that have been made for the different survey results shown here (see the detailed discussion by Prandoni et al. 2000a,b), which make quantitative comparison at the faintest flux levels somewhat uncertain.
Figure 11 .
Figure 11. Radio components (shown as triangles) with matching Spitzer 24 µm sources within one Spitzer pixel. The Spitzer fluxes are the point response function fitted fluxes from Scott et al. (2010). The dots are matches between AKARI FIS 65 µm survey detections of bright radio sources taken from Dixon's Master radio catalogue (1970), for confirmed (i.e. quality flag 3) AKARI sources lying more than 10 degrees from the Galactic Plane, as described in the caption of Figure 10.
Figure 12 .
Figure 12. Radio luminosity as a function of redshift for the radio sources with measured spectroscopic redshifts listed in Table 5.
Figure 13 .
Figure 13. Infrared colours of the sources from our ATCA survey with cross-matches in both the AKARI 90 µm and Spitzer 24 & 70 µm bands from Table 5. The black squares represent the catalogue sources, with the spectral tracks of an ensemble of star-forming galaxies with increasing far-infrared luminosity from L_IR > 10¹⁰ L⊙, 10¹¹ L⊙, 10¹² L⊙, and a spectral track from an AGN also plotted. The large crosses show the zero-redshift points, with the further smaller crosses corresponding to steps in redshift of 0.1.
Figure 14 .
Figure 14. Optically identified radio components from the CTIO MOSAIC-II images. The number on top of each image is the running number of the radio component listed in Table 5. The scaling of each optical image has been adjusted to show the optical galaxy.
Figure 15 .
Figure 15.Cross correlation between the ATCA radio components and CTIO MOSAIC-II R-band galaxies within a 1 ′′ search radius.The equivalent counts after arbitrarily shifting the radio coordinates by +60 ′′ in both RA and Dec are shown as the solid filled bars at the bottom right of the Figure, as an indication of the likely false identification rate, which is clearly in line with that expected from the observed density of galaxies in the CTIO MOSAIC-II images.
Table 1 .
Summary of ancillary observations available for the ATCA-ADFS deep field
Table 2 .
The component catalogue (the full version is available as Supplementary Material in the on-line version of this article).The component parameters listed in the catalog are:
Table 3 .
Catalogue of components satisfying the Magliocchetti et al. (1998) criterion. The proposed multi-component sources listed in this Table
Table 4 .
20 cm differential source counts for the ATCA-ADFS survey
Table 5 .
AKARI, Spitzer and Herschel associations with radio sources in the ATCA-ADFS survey. The Herschel (referenced as HSO in this Table) HerMES fluxes were extracted from the Herschel HerMES Public data release available from the HeDaM server at http://hedam.oamp.fr/HerMES/release.php, where we have used the band-merged StarFinder catalogues with the xID multi-band (250, 350 and 500 micron) fluxes measured at the positions of the StarFinder 250 micron sources.
on spiral galaxy with DENIS Blue and Red magnitudes of 14.6 and 14.1 respectively, and in the GALEX FUV and NUV bands with 22.39 and 21.03 mag respectively.Secondly, ATCA component 187 is a bright radio source previously detected in the SUMSS survey (SUMSS J044532-540211) with a radio flux of 1.22 mJy at MHz, suggesting that it may have brightened considerably (assuming a normal spectral index), and associated with an object having DENIS Blue and Red magnitudes of 18.4 and 17.7 mag respectively and a magnitude of 21.68 in the GALEX NUV band.
Table 6 .
Table continues from above: AKARI, Spitzer and Herschel associations with radio sources in the ATCA-ADFS survey. The Herschel fluxes were extracted from the Herschel HerMES Public data release available from the HeDaM server at http://hedam.oamp.fr/HerMES/release.php, where we have used the band-merged StarFinder catalogues with xID multi-band (250 µm, 350 µm and 500 µm) fluxes measured at the positions of the StarFinder 250 µm sources.
Table 8 .
Continuation of the source catalogue
Table 9 .
Continuation of the source catalogue
Table 10 .
Continuation of the source catalogue
Table 11 .
Continuation of the source catalogue
Table 12 .
Continuation of the source catalogue
|
2012-07-10T08:29:12.000Z
|
2012-07-10T00:00:00.000
|
{
"year": 2012,
"sha1": "d9e8054ba87f18ecab48332f7b6568c3e3634c9c",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/427/3/1830/3818913/427-3-1830.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "d9e8054ba87f18ecab48332f7b6568c3e3634c9c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
235322174
|
pes2o/s2orc
|
v3-fos-license
|
Patients and Professionals as Partners in Hypertension Care: Qualitative Substudy of a Randomized Controlled Trial Using an Interactive Web-Based System Via Mobile Phone
Background The use of technology has the potential to support the patient´s active participation regarding treatment of hypertension. This might lead to changes in the roles of the patient and health care professional and affect the partnership between them. Objective The aim of this qualitative study was to explore the partnership between patients and health care professionals and the roles of patients and professionals in hypertension management when using an interactive web-based system for self-management of hypertension via the patient’s own mobile phone. Methods Focus group interviews were conducted with 22 patients and 15 professionals participating in a randomized controlled trial in Sweden aimed at lowering blood pressure (BP) using an interactive web-based system via mobile phones. The interviews were audiorecorded and transcribed and analyzed using thematic analysis. Results Three themes were identified: the technology, the patient, and the professional. The technology enabled documentation of BP treatment, mainly for sharing knowledge between the patient and the professional. The patients gained increased knowledge of BP values and their relation to daily activities and treatment. They were able to narrate about their BP treatment and take a greater responsibility, inspired by new insights and motivation for lifestyle changes. Based on the patient’s understanding of hypertension, professionals could use the system as an educational tool and some found new ways of communicating BP treatment with patients. Some reservations were raised about using the system, that it might be too time-consuming to function in clinical practice and that too much measuring could result in stress for the patient and an increased workload for the professionals. In addition, not all professionals and patients had adopted the instructions regarding the use of the system, resulting in less realization of its potential. Conclusions The use of the system led to the patients taking on a more active role in their BP treatment, becoming more of an expert of their BP. When using the system as intended, the professionals experienced it as a useful resource for communication regarding BP and lifestyle. Patients and professionals described a consultation on more equal grounds. The use of technology in hypertension management can promote a constructive and person-centered partnership between patient and professional. However, implementation of a new way of working should bring benefits and not be considered a burden for the professionals. To establish a successful partnership, both the patient and the professional need to be motivated toward a new way of working. Trial Registration ClinicalTrials.gov NCT03554382; https://clinicaltrials.gov/ct2/show/NCT03554382
Background
Medical advances and better living conditions have led to increasing lifespans and a growing population living with chronic conditions such as hypertension [1].With limited health care resources, new, more effective ways of managing chronic conditions need to be developed [2].Patients cannot be regarded as passive recipients of care but will need to be the main providers of care for themselves.With this, the role of health care professionals will also need to change from being the expert provider of care to being a cocreator of care with the patient [3,4].During 2020 and the COVID-19 pandemic, the need for this has become even more evident.Patients need to be able to perform effective self-management in their homes and not be dependent on visiting or using health care facilities [5].However, self-managing high blood pressure (BP) is something patients do every day by choosing what to eat, deciding whether to exercise, trying to decrease stress, and remembering to take their prescribed medication [3].Health care professionals have an important role to play in supporting patients to self-manage, ideally working in partnership with patients [6].
A European standard for a minimum level of patient involvement was recently established with the aim to support a wider implementation of person-centered care (PCC) [7].PCC is a health care approach where the patient's subjective perception of illness and their preferences and values are the starting point for the care process.Partnership between patient and professional, as well as patient narratives and shared documentation, are considered key concepts in PCC.Within the narrative and examination, the patient's need of care, prerequisites, resources, and obstacles are identified and documented together with the patient [8].Attributes defining partnership vary in different publications, but shared decision making, shared knowledge, communication, and shared power are commonly mentioned.The consequences are described as empowerment of the patient and improved health outcome and health care utilization [9][10][11].Patients appear to value other aspects of partnership than formal frames, appreciating proximity and receptive communication more than shared documentation and goal setting [12].Using technological tools in health care may strengthen the potential for patient self-management, and the understanding and practice of partnership between patient and professional might change as a result [13].
Objectives
Using an interactive information technology system requires interaction between patients and professionals, thus possibly affecting the patient-professional partnership.New roles for patients and professionals may be enabled.To date, there is limited research on how using technological tools in BP treatment affects the relationship between the patient and health care professional.
The objectives of this study were to explore the partnership between patients and health care professionals and further the roles of patient and professional when using an interactive web-based system for self-management of hypertension via the patient's own mobile phone.
Study Design
This study builds on a previously described interactive web-based communication system for self-management of hypertension called CQ (developed by Circadian Questions AB and referred to in this paper as "the system").The system has been described in earlier publications [14][15][16], and an overview can be seen in Figure 1.During the planning, execution, and evaluation of the components of the system in the pilot project, the participating patients and professionals were actively involved [14][15][16][17][18].The system was found to be relevant and easy to use [19], resulting in a significantly decreased BP for the participants (systolic BP -7 mm Hg and diastolic BP -4.9 mm Hg) [20].Furthermore, use of the system was considered a resource for PCC and a more autonomous, knowledgeable, and active patient [21,22].The system described in Figure 1 is now being tested in a randomized controlled trial (RCT; Person-Centeredness in Hypertension Management Using Information Technology [PERHIT]), including 900 patients with hypertension equally allocated to an intervention and a control group.The trial is conducted in primary care in 4 health care regions in southern Sweden.The aim of the trial is to lower BP in patients with hypertension in primary care.In addition, person-centeredness, patient self-reports such as daily life activities, and awareness of risk will be evaluated [23].
In short, the intervention consists of the following: • Start-up meeting was scheduled with a nurse or physician at the local primary health care center (PHCC), where instructions were given about how to use the system at home, including measuring BP daily. Questions regarding side effects were selected according to the patients' medication. Patients could choose to receive different relevant motivational messages on different days of the week. The messages were in the form of motivational questions and were intended to function as an inspiration for healthy choices (e.g., "Nice walk at lunch today?"). Patients also received a manual of the system and were advised to watch videos on BP measurement and how to enter data via their mobile phones.
• During 8 consecutive weeks, patients used the system at home and reported BP, symptoms, medication intake, side effects, lifestyle, and well-being. After log-in, patients and professionals had access to visualization of the self-reported data in graphs via a secure web portal. All data were saved in a secure database, not on the mobile phones (a minimal sketch of such a daily self-report record is given after this list).
• Follow-up consultation was scheduled with a nurse or physician at the local PHCC after finishing the 8-week intervention.Professionals were encouraged to discuss graphs with patients.An example of a system graph is presented in Figure 2.
• Follow-up consultation was scheduled for 12 months after the trial began.

Several interventions comprising mHealth (the use of mobile devices in health care) and hypertension have shown promising results [24][25][26]. However, the evidence is scarce, and several research studies have called for large RCTs with mHealth interventions that involve more patients for a longer time period [27][28][29][30].
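For illustration only, the sketch below shows the kind of daily self-report record the system collects and a simple summary a follow-up consultation might start from (cf. Figure 2). The field names and values are assumptions made for this sketch, not the actual data model of the CQ system.

```python
from dataclasses import dataclass
from datetime import date
import statistics

@dataclass
class DailyReport:
    day: date
    systolic: int           # mm Hg
    diastolic: int          # mm Hg
    medication_taken: bool
    physical_activity: int  # e.g. self-rated 0-10 (hypothetical scale)
    wellbeing: int          # e.g. self-rated 0-10 (hypothetical scale)

# Invented example reports, not data from the PERHIT trial.
reports = [
    DailyReport(date(2019, 6, 3), 148, 92, True, 2, 5),
    DailyReport(date(2019, 6, 4), 139, 87, True, 6, 7),
    DailyReport(date(2019, 6, 5), 132, 84, True, 7, 8),
]

# A simple summary the web portal graphs could be built on.
mean_sys = statistics.mean(r.systolic for r in reports)
mean_dia = statistics.mean(r.diastolic for r in reports)
print(f"mean BP over period: {mean_sys:.0f}/{mean_dia:.0f} mm Hg")
```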
In this qualitative study, we conducted focus group interviews with patients and professionals participating in PERHIT.The Consolidated Criteria for Reporting Qualitative Research (COREQ) checklist was used to ensure rigor in reporting the study design and conduction [31].
Recruitment and Participants
Four PHCCs participating in PERHIT in different geographical and socioeconomic areas were strategically selected to reflect a broad socioeconomic area.Two of the PHCCs were located in midsize cities, one in a larger city suburb, and one in a smaller city.Patients and professionals were contacted by the staff at the PHCC to take part in focus group interviews.
At the time of the interviews, all patients had completed their 8-week intervention and attended the follow-up consultation with their nurse or physician.The time elapsed from the completion of the intervention to the interview varied between the patients from 1 week to 3 months (median 31 days).The inclusion criteria for the patients were the same as for the PERHIT study: aged older than 18 years, diagnosis of hypertension, treatment with at least one antihypertensive drug, and understanding of Swedish in order to be able to provide informed consent and make use of the system using the mobile phone for answering questions [23].The inclusion criteria for the professionals were being a nurse or physician at the PHCC and having experience working with the PERHIT study.
Since only 2 to 4 professionals were involved in the study at each site, other professionals in the PERHIT study from nearby sites were also approached and asked to participate in the same interview.In total, professionals from 8 different PHCCs contributed to the study.
Data Generation
Prior to the focus group interviews, 2 semistructured interview guides were developed by the research team, one for the patient groups and one for the health care professional groups. A test interview with mock patients was conducted prior to the first interview to evaluate the questions, resulting in some changes to the interview guide. After the first focus group interview with patients, it was obvious that a few questions needed to be rephrased. These were minor changes, and the material from the interview was still considered useful. No changes to the interview guides were made after that. Interview topics are presented in Textbox 1. Interviews began after introductions, small talk, and reiteration of the research goal [32].

Using the system and technology:
• Experiences of using the technology and how it was used during the 8-week intervention
• Perceptions of motivational messages
• How/if using the system has affected everyday life (patients)
• How/if using the system has affected working methods in blood pressure treatment (professionals)
• Experiences of using other technical systems for chronic disease in health care

Focus group interviews were held at the PHCCs from June 2019 to January 2020. A total of 22 patients participated in 4 focus group interviews, with 4 to 7 patients in each. Three focus group interviews, with 4 to 6 professionals each (n=15 total), were also conducted. No compensation was offered to the participants except for coffee and fruit during the interview. The focus group interviews lasted from 64 to 97 minutes and were held in Swedish. UA (first author) was the moderator of the focus group interviews. UB (second author), who is experienced in qualitative research, assisted and took notes.
Prior to the focus group interview, UA had been in contact with the patients and professionals by telephone or email to set a date and time for the interview.No other relationship prior to the interview was established.At the interviews, researchers presented themselves with their occupation and as members of the research group conducting the RCT.Only the participants and researchers were present at the interviews.
Data Analysis
The interviews were audiorecorded and transcribed verbatim.They were also videorecorded, with the purpose to serve as an aid for memory during the analysis phase.Thematic analysis according to Braun and Clarke [33] was used on the dataset, since it is a flexible method when performing qualitative analysis, allowing for both an inductive and deductive approach to the data [34].
The recordings of the interviews were listened through several times and the anonymized transcripts were checked against the recordings for errors by the first author (UA).During this phase, initial thoughts and ideas were noted.UA, UB, and KK (last author) read the transcripts repeatedly.UA created initial codes by systematically going through all the interview transcripts without a predefined coding frame.Interviews with patients and professionals were coded simultaneously using NVivo software (version 12, QSR International).The initial codes were compared and organized into common categories, which were discussed by UA, UB, and KK.Since we were interested in a specific aspect of the participants' experiences-how using the system affected the experience of partnership between patients and professionals-we then used a deductive approach, inspired by previous research concerning the concept of partnership [8][9][10][11] and partnership and technology [13,35].The initial codes were reviewed and arranged in preliminary themes and subthemes, focused on aspects of partnership.A narrative description of the preliminary themes and a thematic map were created and discussed by UA and KK.Themes were reviewed and checked against the datasets.In the process of defining and naming the themes, UA, KK, and AR collaborated and discussed until consensus was reached.A detailed description of each theme was developed, and informative names for the themes were established.To further visualize the themes, descriptive excerpts were identified.
Ethical Considerations
The study was approved by the regional ethical review board in Lund (2017/311 and 2019/00036).Participants were given oral and written information about the study before they signed a consent form.All transcripts were anonymized to ensure confidentiality.The study was registered with ClinicalTrials.gov[NCT03554382].
Study Sample
Characteristics of participating patients and professionals are presented in Tables 1 and 2. In the analysis of the focus group interviews, 3 actors were identified: the patient, the professional, and the technology.The roles of the different actors are described in 3 themes: using technology as an aid for self-management and treatment of high BP, professional as a consultant, and patient as active and responsible partner.An overview of the themes and subthemes is presented in a thematic map in Figure 3.
Technology as a Tool for Documentation of Self-Reports and Appropriate Drug Treatment
The professionals considered the different components of the system to be helpful tools in the treatment of high BP.The documentation of the self-reports via the graphs made it possible to communicate more easily about the treatment.If a new drug was prescribed, the patient could follow the effect from day to day, thus becoming more aware of the BP treatment.It was considered a benefit that the patients monitored their BP at home instead of coming to the PHCC.During the intervention, some patients had contacted their nurse or physician when their BP was high, thus acting on high BP values.
Professionals viewed selected patients' graphs during the 8-week intervention if the patient encountered problems adjusting the BP.If the BP was still too high, they contacted the patient and could adjust the drug treatment without the patient having to come to the PHCC.They believed this was educational for them as professionals as well, leading to increased understanding of the variation of BP.
And you could go in and see...see when they were running high, if something had happened, so to say, that day. If they were stressed or...if something...and if you saw that they were still running too high, so to say, you called and talked to them and said we need to adjust your medicine. [Health Care Professional 2 (HCP2)]

For some of the patients, using the system brought a closer and more frequent contact with their prescribing physician. If they had altered their BP medication at the start of or during the intervention, they could, with daily measurement, report its effect on the BP.

But even with close, sort of, contact with M [the patient's physician] which is...it's been short telephone calls where you can...yeah, but he's asked "How are you?" etc. Yeah, this is how it's going now, and then we've been able to change it quickly. [Patient 13 (P13)]

The professionals' opinions about how feasible it would be to use the system as an integrated part of BP management differed. Not all were positive. Some experienced that it was too time-consuming and did not provide enough benefits to make it worthwhile.
Graphs as an Educational Tool for Understanding BP Values and Relation to Daily Life, Activity, and Treatment
During the follow-up consultation, graphs were used as an educational tool.Through the graphs, the patient could become aware of the normal variability of BP.They could also connect BP variations to physical activity, stress, or medication intake, for example, creating awareness of lifestyle and medication effect on BP.The patients contributed with their explanation of BP variation in relation to their daily activities.
But I myself had...in my head I kind of had the idea that now I want to see these graphs for these particular days and what I knew that I...had reported high then and also made a note of it, so to say, and it matched well. Yeah, I thought there was good correlation between these... [P22]
Not all professionals viewed the graphs with the patients during the follow-up consultations, thus not using the system as intended.In those cases, the professionals expressed that the patients were passive during the follow-up consultation.The BP and lifestyle were not discussed, and instead the patients waited for the professional to introduce the next step in the study procedure.In these cases, the professionals had not adopted the instructions given by the research team regarding the intended use of the system.
Motivational Messages Yielding Irritation, Indifference, or Inspiration
The optional motivational messages included in the system, in the form of motivational questions, were meant to function as an inspiration for healthy choices.Opinions among the patients about the messages differed.Some patients perceived them as irritating since it was not possible to submit an answer.These patients had not been informed (and had not read the manual) about the intention with the messages to function as small reminders not requiring an answer.They thought a positive answer to the questions would generate further information.Others simply ignored the messages, since they disappeared in all the other incoming information in their mobile phones, such as text messages, emails and alerts.Other patients perceived the motivational messages as something positive, finding them an inspiration for healthy choices or considered them a small sign that someone cared about them.
Becoming More Involved and Active
After using the system, patients were considered by the professionals to be more active in the consultation.They asked questions and wanted to discuss their BP values in relation to the documentation of their daily activities.The professional did not have to lead the conversation as they usually did.
Yeah, they were very serious; they had direct questions then, oh yeah, I saw that that day looked like this, what do you say about this, sort of...I didn't have to ask that much; they had their questions for me. [HCP13]
The patients considered themselves as more involved during the follow-up consultation, since they contributed with their knowledge about how they had felt and their health status.They considered themselves more prepared for the consultations and had thought about questions and what they wanted to discuss with the professional.They also believed this was recognized and confirmed by the professional, who was considered to be interested and attentive.
Connecting BP Values to Activity and Treatment
Using the system made the patients more aware of how their choices affected their BP and their health, and they reflected upon their days.Being able to measure the BP frequently gave insight into how different BP levels corresponded to daily activities.
But, you know, I've noticed right away when I've made that change there, I mean with the exercise and then also training with my dumbbells at home and stuff, that it's had an effect; it has, you know. [P10]
Not all patients logged in to the website and viewed their reported values in graphs during the intervention.Reasons for not logging in were that they were not aware of the possibility, they were not interested, or they chose to wait until the follow-up consultation.The patients who did view the graphs by themselves thought they were valuable and used them to relate activities or well-being to their BP values.Some patients who were not aware of the possibility to log in to the website kept notes by themselves, writing down the BP and what they had done and in some instances sharing their notes with their nurse or physician.Even if they did not view the graphs, they connected their daily BP value to how they felt or what they had done during the day.
Self-Monitoring Resulted in Increased Insight
By monitoring the BP and relating it to daily life, the patient became the expert on his or her BP.Some patients related that they got to know themselves and their bodies better.By daily monitoring, they could anticipate the BP value when measuring it in the evening.For example, after a stressful day, they expected a high BP value.They became aware of what affected the BP and what they could do about it.Their own responsibility for a successful treatment became clear to them.
Taking Responsibility for BP and Lifestyle
The patients regarded it as their responsibility to contact their physician or nurse when their BP was uncontrolled.They also considered it their responsibility to keep track of their BP and appreciated being able to check their BP at home.
So if I felt a little uncomfortable in my body and I went and checked my blood pressure and it was a total disaster, yeah, then I could sound the alarm earlier than if I hadn't had a gauge. So in that way I feel safer now, I think. [P10]
The professionals related that they saw an increased interest in self-monitoring of BP, even outside the settings of a study.They had noticed that some of their patients had bought BP monitors and used them at home.This was mostly considered positive, although the professionals also thought some patients measured and monitored excessively, which could lead to an increased workload for them and stress for the patient.
The patients considered diet and physical activity important regarding BP treatment.Participation in the study was a motivator for lifestyle changes such as increasing physical activity.They believed they had knowledge about the positive effects of exercise and a healthy diet prior to the study but had not taken it to heart before.Seeing the BP values every day became a reminder and encouragement to do something about the situation.The need to do well and be normal, to have a BP within the target values, was also a motivator for healthier choices.Some patients changed their dietary habits, cutting back on sweets, salt, and licorice.Others had thought about changing their habits but had not yet started.
Focusing on Aspects of BP and Lifestyle That Mattered to the Patient
The professionals believed using the system contributed to more lifestyle-oriented conversations with the patients.Instead of only focusing on the effect of BP-lowering drugs, they talked about other aspects of high BP, such as how the patient's lifestyle affected BP.The professionals considered the conversation to be more focused on the individual patient's needs and resources than usual BP consultations.The patients said that they could discuss things that were important to them, that either they themselves or the professionals brought up.Some of the professionals expressed that they became more of a consultant for the patient than a lecturing nurse or physician.When the patients were more active during the follow-up consultations, possible lifestyle changes, which were significant for them, came to light and the discussion could focus on that on their terms.
And then maybe something turns up...one thing we can help with and work with, but then maybe we can calm down a little with the rest of...because it's this that the patient's a little interested in or feels like I have to...this...I can make a change here, and then we can help with that.Yeah, it was...it made it easier...to have that kind of discussion, I think. [HCP3]
Personalizing the Consultation
By introducing the system to the patients and looking at the graphs together, the professionals related that they found out more about the patients and learned something new from them.One professional was surprised about how much the knowledge about BP differed between patients; some did not know about the risk of elevated BP or their own target BP.When this came to light, the discussion could be held in a more personalized way.
While using the graph as a visual tool, some of the professionals related that they learned new ways of talking about BP and lifestyle.Despite years of experience of talking to patients about BP, this consultation was considered more rewarding as it was more personal and relevant.
In some way I learned to teach people about blood pressure, which I actually hadn't done before.I've seen so many blood pressure patients, but haven't ever had the time to get into this particular person's condition, kind of.[HCP13] Thus, not only could the patients gain new knowledge by using the system, the professionals could deepen their understanding of hypertension management.
Partnership
As shown above, the system contributed to several attributes of partnership (see the code list from NVivo in Multimedia Appendix 1).Patients contributed with their knowledge about their health status and situations while professionals contributed with expert knowledge on BP, thus sharing knowledge.The professionals expressed that they learned new things using this working method.Both patients and professionals declared that the consultation was more equal as the patient was more prepared and knowledgeable, thus indicating shared power and shared collaborative decision making.
Principal Findings
This study aimed to explore the partnership and roles of patients and professionals in hypertension management when using an interactive web-based system for self-management of hypertension.Focus group interviews with patients and professionals were conducted and analyzed using thematic analysis.
Three themes, on the technology, the patient, and the health care professional, are evident when using the interactive web-based system via mobile phone.The described themes represent one actor each.The system (the technology) is mainly a tool for documentation and sharing knowledge between patient and professional, thus affecting the partnership and how BP is communicated.By using the system, patients gained insight into how BP was affected by their lifestyle and became motivated to make healthier choices.As experts of their BP, they came well prepared to the follow-up consultation and were then able to take on a more active role.The professionals took a more secondary role during the follow-up consultation, controlling the conversation to a lesser extent.They were no longer the only holders of data and knowledge but instead became consultants and support to the patients, contributing with expert knowledge adjusted for the patients' needs.Both patients and professionals described a consultation on more equal terms than usual, thus creating a base for a successful partnership.This was the case described by most of the participating professionals and patients but not shared by all.
Comparison With Prior Work
Previous research has found that self-monitoring of BP enables activation of patients and motivates them to engage in lifestyle changes, favoring self-management [6,36].By self-monitoring, the patient can provide the data that was previously produced by the health care professional at the clinical encounter.According to Shahaj et al [6], this might potentially challenge the dynamics between patient and professional, which is in line with the findings in this paper.Most of the patients in this study considered it their responsibility to check their BP regularly and adhere to the prescribed treatment.This was mainly viewed as something positive, but it could have potentially negative effects.If the patient using the system is not able to take on the responsibility, for example, not being able to interpret the BP values and acting on high values, the use of the system could be a burden.The system is intended to be used as a complement to the physical meeting and examination in usual care, and thus a patient not being able to use or interpret the system should not receive inferior care compared to treatment as normal.On the other hand, if the patient is able to take on the responsibility and self-manage effectively, the need for physical check-ups is diminished and contact with the health care professional can be managed over the phone or digitally in an effective way.
Wildevuur et al [13] studied how the partnership between patients and professionals is affected by the use of information and communication technology.They found that using information and communication technology in disease management requires an adjustment of the partnership through strengthened potential for self-management and shared analyzing of data.The health care system can be reorganized with new care pathways, where the data provided by the patient can serve as an initiative for treatment.Ultimately, it is the patients' trust in technology and ability to self-manage that shapes the partnership with the professionals, provided that the professionals can adapt to the different needs of different patients.
In our interviews, opinions on using the system differed among professionals.Most of them found the system to be a helpful tool regarding hypertension management, inspiring new ways to talk about hypertension and working with the patient as an equal partner.Others were apprehensive about using it in clinical work since they found it too time-consuming.The professionals' views about the role of technological tools in clinical work also differed; some did not believe it would bring any positive effects while others considered it an inevitable and possibly favorable part of their future working methods.A precondition for technology to enable effective PCC is, according to Wildevuur et al [35], that the technical solution is efficient for both patients and professionals and reduces the pressure on health care systems.In our study, the intervention technology is not integrated in the established health care technology, thus requiring the professionals to work in parallel systems, and this might cause problems.During the interviews, we found that the system was not used as intended in some instances despite a thorough introduction and a user manual.Some of the patients were not aware of the possibility of logging in to the web portal and viewing their reported values in graphs.This opportunity for visual feedback and insights of connections between BP and reported factors was therefore lost.Some of the professionals had not viewed the graphs together with the patients at the follow-up consultation after the 8-week intervention, thus disregarding a large part of the potential use of the system and an important kick-off for lifestyle changes during the rest of the 12 months.A lesson learned is that when introducing a new technical system, the professionals' opinions and preferences about technology need to be acknowledged and considered.The professionals need to receive sufficient education on how to make use of the system in an optimal way and correctly instruct the patients on how to use it and what the benefits are for the patients in doing so.Implementation of a new way of working should bring benefits and not be considered a burden for the professionals.To establish a successful partnership, both the patient and professional need to be motivated about the new way of working.
The optional motivational messages in our study were received with mixed emotions by the patients.Previous interventions, which focused on text message-based lifestyle advice with the aim to lower BP, had shown small or insignificant positive results, indicating that motivational messages might be a part of a successful lifestyle intervention but are not sufficient on their own [37,38].The irritation some of the patients described about the messages could be attributed to a lack of knowledge of the intention with the messages, highlighting the need for sufficient education of the health care professionals when conducting a study like this.That some patients were aware of the intention of the messages but chose to ignore them indicates that for a lifestyle intervention to be successful, some response or action is necessary.Otherwise, the message will disappear in the amount of information received daily.
Previous research has shown that follow-up consultations regarding hypertension are usually dominated by the professional and mainly focused on effect of drug treatment on BP [39].As a contrast to this, during the follow-up consultations in this study, the focus was more on lifestyle and its relation to BP.The visualization of BP and lifestyle in graphs was considered valuable and contributed to the change of focus.The patients described that with insight gained by using the system at home and during the consultation came motivation to make lifestyle changes.This can lead to improved physical and psychosocial well-being beyond the effect on BP levels.
When used as intended, the system was found to be a resource for a person-centered approach in hypertension management.After implementing the system for 8 weeks, the patients could express their views and experiences of high BP.Both patients and professionals could contribute with knowledge during the follow-up consultation.The graphs could serve as documentation, shared by the patient and the professional.
To further analyze the potential benefit of using this system, future studies could focus on testing in other clinical or cultural settings such as hospital clinics with outpatient care or in other countries with a more diverse population.As reported by Samkange-Zeeb et al [40], it is important to consider migration background and language competency to make information and services via the internet accessible in diverse groups.
Strengths and Limitations of the Study
A strength of this study is that it builds on previous work and confirms results found in the pilot project regarding the potential of the system to support patient self-management.By including experiences from both patients and professionals in the study, a more comprehensive dataset was obtained.Both perspectives must be identified before implementation in clinical practice.
This study has some limitations. During recruitment for the focus group interviews, we aimed to put together groups with a diversity of men and women from different socioeconomic and cultural backgrounds and in different age groups. We therefore approached PHCCs in different socioeconomic areas. The number of eligible patients, with the diversity above, per PHCC was limited, and not all proposed patients agreed to participate. We therefore had to make some compromises in the selection of patients. One included health care center in a multicultural area unfortunately had to be excluded from the trial due to language problems and subsequent methodological errors. Also, an inclusion criterion was to understand Swedish. Consequently, we did not achieve diversity in terms of ethnicity and cultural background. The participants were comparable with the Swedish hypertension population in terms of age, but the sex distribution differed, with a majority of men in our study [41]. As always in studies like this, there is a risk for selection bias in the recruitment of patients. The patients who are already aware of their health status and motivated to treat their condition may have chosen to participate in the trial to a higher degree.
Conclusion
Using technology for strengthening patients' potential for self-management has the possibility to change the relationship between patients and professionals.The patients perceived themselves as more active and motivated in their BP treatment.When using the system as intended, the professionals experienced it as a resource for communication regarding BP and lifestyle.Both patients and professionals described a consultation on more equal grounds, laying the foundation for a constructive partnership.
To realize the potential in a system like this, health care professionals need to be motivated and interested in new approaches in management of chronic conditions.Integration of the technology in the existing technical system is essential.Health care professionals also need to receive a thorough introduction so that they, in turn, can properly instruct and motivate patients to use the system and read the manual.If this is not achievable, introduction of a new technical solution may instead increase workload and become a burden in chronic condition management both for professionals and patients.
Figure 1 .
Figure 1.Overview of the interactive web-based communication system: (a) blood pressure device; (b) self-reports, reminders, and optional motivational messages via patient's own mobile phone; (c) database where self-reports are saved; and (d) secure web portal available to patients and professionals for data visualization.
Figure 2 .
Figure 2. Graph showing correlation of physical activity with blood pressure, as shown to participants.
Figure 3 .
Figure 3. Overview of the themes and subthemes.
Textbox 1 .
Interview topic and subtopic list.
a Years with hypertension: missing 1 data point. b Education level: missing 2 data points.
Age intervals (years), n (%)
, before I started this study I knew that exercise was the best, but even so...you heard it every time you came down and took your blood pressure and stuff, but yeah...you know, not much happens.
|
2021-06-04T06:16:23.057Z
|
2020-11-30T00:00:00.000
|
{
"year": 2021,
"sha1": "a90a152c7f5d3503ae1eec5002ed21e1d751c27e",
"oa_license": "CCBY",
"oa_url": "https://www.jmir.org/2021/6/e26143/PDF",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e1534037306afc73894fe5ec7dc3156207b2ffd5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
14396491
|
pes2o/s2orc
|
v3-fos-license
|
Ionic Gelation Controlled Drug Delivery Systems for Gastric-Mucoadhesive Microcapsules of Captopril
A new oral drug delivery system was developed utilizing both the concepts of controlled release and mucoadhesiveness, in order to obtain a unique drug delivery system which could remain in the stomach and control the drug release for a longer period of time. Captopril microcapsules were prepared with a coat consisting of alginate and a mucoadhesive polymer such as hydroxypropyl methyl cellulose, carbopol 934p, chitosan and cellulose acetate phthalate using an emulsification ionic gelation process. The resulting microcapsules were discrete, large, spherical and free flowing. Microencapsulation efficiency was 41.7-89.7%, and a high percentage efficiency was observed with (9:1) alginate-chitosan microcapsules. All alginate-carbopol 934p microcapsules exhibited good mucoadhesive property in the in vitro wash off test. The drug release pattern for all formulations in 0.1 N HCl (pH 1.2) was diffusion controlled, gradual over 8 h, and followed zero order kinetics.
Microencapsulation by various polymers and its applications are described in standard textbooks 1. Microencapsulation and the resulting microcapsules have gained good acceptance as a process to achieve controlled release and drug targeting. Mucoadhesion is a topic of current interest in the design of drug delivery systems, used to prolong the residence time of the dosage form at the site of application or absorption and to facilitate intimate contact of the dosage form with the underlying absorption surface, improving and enhancing the bioavailability of the drug 2. This study describes the formulation and evaluation of gastric-mucoadhesive microcapsules of captopril employing various mucoadhesive polymers designed for oral controlled release. Captopril is an angiotensin converting enzyme inhibitor used in the treatment of hypertension and congestive cardiac failure; it requires controlled release owing to its short biological half-life of 3 h, and the drug is unstable in the alkaline pH of the intestine, whereas it is stable in acidic pH and specifically absorbed from the stomach 3,4.
Based on the above reasons, there is a clear need to localize the developed formulation at the target area of the GIT. Microcapsules containing captopril were prepared employing sodium alginate in combination with four mucoadhesive polymers: hydroxypropylmethylcellulose, carbopol 934p, cellulose acetate phthalate and chitosan. An emulsification-ionic gelation process was used to prepare the microcapsules 4.
Core coating material (sodium alginate) and the mucoadhesive polymers were dissolved in distilled water (32 ml) to form a homogeneous polymer solution. Core material (captopril, 1 g) was added to the polymer solution and mixed thoroughly to form a smooth viscous dispersion. The resulting dispersion was then added in a thin stream to 300 ml of arachis oil contained in a 500 ml beaker with stirring at 400 rpm using a mechanical stirrer. The stirring was continued for 5 min to emulsify the added dispersion as fine droplets. Calcium chloride solution (10% w/v, 40 ml) was then added slowly while stirring for the ionic gelation (or curing) reaction. Stirring was continued for 15 min to complete the curing reaction and to produce spherical microcapsules. The mixture was then centrifuged and the product thus separated was washed repeatedly with water and dried at 45° for 12 h. Based on the above procedure, the various formulations given in Table 1 were developed. Drug content estimation was done by the method reported by El-Kamel et al. 5: 20 mg of the microcapsules were stirred in 3 ml of sodium citrate solution (1% w/v) until complete dissolution occurred.
One milliliter of methanol was added to the sodium citrate solution to gel the solubilized calcium alginate and further solubilise captopril. This solution was then filtered to obtain the drug solution. The filtrate was suitably diluted with 0.1 N HCl and the absorbance was taken at 212 nm.
Microencapsulation efficiency was calculated as the ratio of the estimated drug content to the theoretical drug content, multiplied by 100. Dissolution studies were performed on microcapsules containing a quantity equivalent to 100 mg of drug, filled into capsules, using a USP 23 TDT-06T apparatus (Electrolab, paddle method) at 50 rpm. The medium used was 900 ml of 0.1 N HCl (pH 1.2), maintained at 37±0.5°; 5 ml samples were withdrawn at different time intervals and replaced with 5 ml of dissolution medium. The samples were filtered and assayed spectrophotometrically at 212 nm [6][7][8], after appropriate dilutions. Dissolution testing was also performed on 100 mg of pure drug.
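As a small worked example of this calculation, with invented figures rather than values from the study, a batch formulated to contain 20% captopril that assays at 16.4% would have an efficiency of 82%:

```python
# Minimal worked example of the microencapsulation efficiency calculation
# described above; the drug-content figures are illustrative only.
def encapsulation_efficiency(estimated_pct, theoretical_pct):
    """Efficiency (%) = estimated drug content / theoretical drug content x 100."""
    return (estimated_pct / theoretical_pct) * 100.0

print(round(encapsulation_efficiency(16.4, 20.0), 1))  # 82.0
```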
The mucoadhesive property of the microcapsules was evaluated by an in vitro adhesion testing method known as the wash off method. The mucoadhesive property of the microcapsules was compared with that of a non-adhesive material, ethylene vinyl acetate microcapsules. A piece of sheep stomach mucosa (2 × 2 cm) was mounted onto a glass slide (3 × 1 inch) with cyanoacrylate glue. The glass slide was then connected to a support. Fifty microcapsules were counted and spread over the wet, rinsed tissue specimen, and immediately thereafter the support was hung on the arm of a USP tablet disintegration test machine 9,10. Operating the disintegration machine gave the tissue specimen a slow, regular up-and-down movement; the slides moved up and down in the test fluid at 37±0.5°. The number of microcapsules adhering to the tissue was counted at 2 h intervals up to 8 h. The test was performed in acidic medium (0.1 N HCl, pH 1.2).
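A minimal sketch of how the wash off counts could be summarised as percentage mucoadhesion over time is given below; the counts are invented for illustration and are not the values observed in this study.

```python
# 50 microcapsules are applied at t = 0; counts below are made-up examples.
applied = 50
counts = {2: 44, 4: 39, 6: 33, 8: 28}  # hour -> microcapsules still adhering

for hour, remaining in counts.items():
    pct = 100.0 * remaining / applied
    print(f"{hour} h: {pct:.0f}% adhering")
```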
Microcapsules of captopril with a coat consisting of alginate and a mucoadhesive polymer in (1:1) and (9:1) ratios, namely hydroxypropylmethylcellulose, carbopol 934p, chitosan and cellulose acetate phthalate, could be prepared by the emulsification ionic gelation process. The prepared microcapsules were found to be discrete, large, spherical and free flowing. The size of all the formulated microcapsules was found to be in the range of 40.5 μm to 65 μm.
Microencapsulation efficiency was calculated from the amount of drug loaded in the microcapsules. No significant difference was observed in drug loading. As the proportion of alginate was raised from 1 to 9, there was a significant increase in the microencapsulation efficiency of the microcapsules.
An in vitro wash-off test was performed to assess the mucoadhesive property of the microcapsules. The time of detachment of microcapsules from sheep stomach mucosa was measured in 0.1 N HCl (pH 1.2). All the formulated microcapsules demonstrated good mucoadhesive properties compared to the non-mucoadhesive polymer (ethylene vinyl acetate). The following stages occur during mucoadhesion: initially, intimate contact between the mucus gel and the swelling mucoadhesive polymer (wetting), which allows the polymer strands to relax; this is followed by penetration of the mucoadhesive polymer into the mucus gel network and finally the formation of secondary chemical bonds between the mucus and the mucoadhesive polymer [11,12].
In vitro release studies were carried out in 0.1 N HCl (pH 1.2) and indicated a slow and controlled release of drug from all the formulations (Figs. 1 and 2). Alginate-cellulose acetate phthalate microcapsules demonstrated more sustained release than all other alginate-polymer combinations. Drug release followed zero-order kinetics for all the formulations. The release data were better fit by Higuchi's diffusion model, indicating that the release of drug from all the formulations is diffusion rate limited [13]. The order of increasing release rate observed with the microcapsules was alginate-cellulose acetate phthalate < alginate-chitosan < alginate-carbopol 934p < alginate-hydroxypropylmethylcellulose.
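As an illustration of how the zero-order and Higuchi fits can be compared, the sketch below regresses cumulative release against t and √t. The release values are hypothetical and are not the data shown in Figs. 1 and 2.

```python
# Minimal sketch (hypothetical data, not the paper's results): comparing a
# zero-order model (Q = k0 * t) with Higuchi's diffusion model (Q = kH * sqrt(t))
# by linear regression of cumulative release against t and sqrt(t).
import numpy as np

t = np.array([1, 2, 4, 6, 8, 10, 12], dtype=float)        # sampling times, h (hypothetical)
q = np.array([9.0, 14.0, 21.0, 26.0, 30.5, 34.0, 37.5])   # % drug released (hypothetical)

def linear_fit(x, y):
    """Return slope and R^2 of a least-squares straight-line fit."""
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return slope, 1 - ss_res / ss_tot

k0, r2_zero = linear_fit(t, q)               # zero-order fit
kh, r2_higuchi = linear_fit(np.sqrt(t), q)   # Higuchi fit

print(f"Zero order: k0 = {k0:.2f} %/h, R^2 = {r2_zero:.3f}")
print(f"Higuchi:    kH = {kh:.2f} %/sqrt(h), R^2 = {r2_higuchi:.3f}")
```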
Thus, large spherical microcapsules with a coat consisting of alginate and a mucoadhesive polymer (hydroxypropylmethylcellulose, carbopol 934p, chitosan or cellulose acetate phthalate) could be prepared by the emulsification-ionic gelation process. The microcapsules exhibited good mucoadhesive properties in in vitro wash-off tests. Captopril release from these mucoadhesive microcapsules was slow and extended over a longer period of time. Drug release was diffusion controlled and followed zero-order kinetics. Alginate-carbopol 934p microcapsules and alginate-chitosan microcapsules were found suitable for oral controlled release.

A literature survey reveals that HPLC methods [3][4] have been reported for itopride hydrochloride. Rabeprazole sodium, chemically 2-[[[4-(3-methoxypropoxy)-3-methyl-2-pyridinyl]methyl]sulfinyl]-1H-benzimidazole sodium, is the latest proton pump inhibitor and is used in the management of acid-related disorders [5]. A few analytical methods for the estimation of rabeprazole sodium from biological fluids, including HPLC [6][7], LC-MS [8], LC-NMR [9], column-switching LC [10] and spectrophotometric methods [11][12], have been reported. However, no spectrophotometric or HPLC method has yet been reported for the simultaneous analysis of the two drugs from a combined pharmaceutical dosage form.
A Systronics UV/Vis double beam spectrophotometer (model 2101) with 1 cm matched quartz cells was used for spectrophotometric analysis. Spectra were recorded using the specific program of the instrument, with the following specifications: spectral band width 2 nm, wavelength accuracy ±0.5 nm, wavelength readability 0.1 nm increments. For the HPLC method, a Shimadzu LC-10AT with an SPD-10A detector was used. Different batches of capsule samples of the combined dosage form of itopride hydrochloride and rabeprazole sodium [Itza RB (Cadila Pharmaceutical Pvt. Ltd, Ahmadabad)] were procured from the local market.
In the first method, pure drug samples of itopride hydrochloride and rabeprazole sodium were dissolved separately in distilled water so as to give eight standard dilutions in the concentration ranges of 5-40 µg/ml for itopride hydrochloride and 2-30 µg/ml for rabeprazole sodium. All solutions were scanned in wavelength
TABLE 1: COMPOSITION AND CHARACTERISTICS OF CAPTOPRIL GASTRIC MUCOADHESIVE MICROCAPSULES (composition in g; formulation code and ratio of polymers used)
Values indicate mean±SD (n = 8). ++ good adhesion, + fair adhesion; ++ good stability.
How Are We Managing Patients with Hyperuricemia and Gout: A Cross Sectional Study Assessing Knowledge and Attitudes of Primary Care Physicians?
Background: Studies show that hyperuricemia is an element of the pathophysiology of many conditions. Therefore, the aim of this study was to assess primary care physicians’ knowledge and attitudes toward asymptomatic hyperuricemia and gout management. Methods: A survey-based cross-sectional study was conducted to assess the primary physicians’ attitudes, knowledge, and patient management regarding hyperuricemia and gout. Results: A total of 336 primary care physicians were included. Physicians who read at least one scientific paper covering the topic of hyperuricemia in the past year scored significantly higher in knowledge questions (N = 152, 6.5 ± 2.05 vs. N = 183, 7.04 ± 2.14, p = 0.019). Only around half of physicians correctly identified drugs that can lower or elevate serum uric acid levels. Furthermore, the analysis of correct answers to specific questions showed poor understanding of the pathophysiology of hyperuricemia and possible risk factors. Conclusions: This study identified gaps in primary care physicians’ knowledge essential for the adequate management of patients with asymptomatic hyperuricemia and gout. As hyperuricemia and gout are among the fastest rising non-communicable diseases, greater awareness of the available guidelines and more education about the causes and risks of hyperuricemia among primary care physicians may reduce the development of diseases that have hyperuricemia as risk factors.
Introduction
The final product of purine and protein metabolism in humans is uric acid. Hyperuricemia is defined as an excess of serum uric acid which may lead to the precipitation of uric acid crystals [1,2]. In around 90% of persons with hyperuricemia, there is an insufficient excretion of urate in the kidneys indicating a genetic predisposition [1]. Moreover, serum uric acid levels show a strong heritable component [2]. Other causes include increased endogenous purine production and the consumption of high-purine diets [1].
The prevalence of hyperuricemia is around 20% for both men and women and is increasing, especially in developing countries with a Western lifestyle [1,[3][4][5]. A recent study in Poland conducted in persons of 65 years of age and older found that hyperuricemia was present in 28.2% of women and 24.7% of men [6]. The prevalence of hyperuricemia in Croatia ranges roughly from 8.3% to 10.7% (15.4% male, 7.8% female) according to published research [7,8]. Furthermore, research shows a greater risk of hyperuricemia among men compared to women, but for women the risk rises after menopause [2][3][4].
The exact pathophysiology of hyperuricemia is still unclear and is being investigated. However, research indicates that hyperuricemia is closely related to multiple risk factors. Hyperuricemia itself is a risk factor for the development of gout and is the main factor leading to systemic inflammation in gout. Moreover, inflammation in patients with asymptomatic hyperuricemia increases risks for the development of cardiovascular diseases, diabetes, kidney disease and metabolic conditions [1,9].
Studies show that hyperuricemia is an element of the pathophysiology of many conditions. As this list continues to grow, it stresses the need for understanding how well uric acid homeostasis is regulated [2]. It is essential that primary care physicians are familiar with the management of both asymptomatic hyperuricemia and gout, as asymptomatic hyperuricemia poses an increased risk for the development of metabolic and cardiovascular diseases, and gout impairs the quality of life. Cardiovascular diseases are among the leading causes of death in developed countries [10,11]. Moreover, both cardiovascular and metabolic diseases place great strain on healthcare systems [12][13][14]. Therefore, the aim of this study was to assess primary care physicians' knowledge and attitudes about asymptomatic hyperuricemia and gout management.
Participants
A survey-based cross-sectional study was conducted to assess primary physicians' attitudes, knowledge, and patient management regarding hyperuricemia and gout. For this purpose, an anonymous survey was distributed by e-mail containing a link to a Surveymonkey ® webpage with the survey used in this study. The e-mail was distributed to the official e-mail addresses of the majority of the eligible primary practices in the country. Furthermore, it was sent to all members of family medicine associations in Croatia via the corresponding mailing lists. The study was approved by the Ethics Committee of the Health Centre of the Split-Dalmatia County and the Ethics Committee of the University of Split School of Medicine. Responses were collected during May and June of 2020.
All primary care physicians working on the territory of the Republic of Croatia were considered eligible for inclusion in this study. Participation in the study was voluntary, and physicians received no benefits or compensation for participation. The survey used in the study did not gather data that could be used to reveal the identity of the physician.
Survey
After an extensive literature review that comprised all relevant information regarding knowledge, attitudes and management of hyperuricemia and gout in primary medicine practice, a survey was designed by 3 family physician specialists at the Department of Family Medicine, University of Split School of Medicine. The draft version of the survey consisted of 49 different items, of which 6 items gathered demographic data, 6 items for gout and asymptomatic hyperuricemia management characteristics, while 24 items assessed the physicians' knowledge, and 13 items were related to attitudes regarding hyperuricemia and gout management. Further evaluation by 2 additional family medicine specialists removed 8 knowledge questions due to the low intelligibility of used phrases, or potentially ambiguous answers. Additionally, a total of 5 attitudes items were removed also due to the low intelligibility of the used phrases, and due to the potential danger of revealing the participants' identities. The 6 items that gathered demographic data and 6 items about gout and asymptomatic hyperuricemia management characteristics were not changed in any way after this additional assessment. Finally, a 36-item survey divided into 4 parts was agreed upon and distributed among family physicians.
The first part of the survey consisted of 6 items, and it gathered demographic data including gender, age, work experience, qualification, total number of patients in care and population of work location area.
The second part consisted of 6 items as well, and it collected data about the total number of patients with hyperuricemia in care, the average number of patients with asymptomatic hyperuricemia per month, and the average number of patients with gout per month, the percentage of patients with hyperuricemia receiving drug treatment, the referral rate of patients with uric arthritis to a rheumatologist and the number of scientific papers about hyperuricemia read in the past year.
The third part was an evaluation of the participants' knowledge about hyperuricemia and gout and it consisted of 16 questions related to the management and inter-relation of hyperuricemia and gout. All questions were organized as multiple choice (MCQ), with only one correct answer among 5 offered choices.
The last part consisted of 8 statements that investigated attitudes regarding hyperuricemia and gout management which physicians were asked to rate on a 5-point Likert scale.
The final version of the survey was tested among 18 family physicians for readability, length and understanding of all included items and used phrases. None of the participants reported any difficulties in answering any of the used items.
Statistical Analysis
Sample size analysis was performed with the free Surveymonkey ® sample size calculator [15]. The population of family physicians that we investigated in the Republic of Croatia, according to the Croatian Institute for Health Insurance [16], was 2330. With a confidence interval of 95%, and a margin of error of 5%, the needed sample size was 330 family physicians.
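To make the arithmetic concrete, the sketch below reproduces this calculation with the standard finite-population formula that such calculators typically use; it is not the SurveyMonkey tool itself, and the choice of p = 0.5 is the usual conservative assumption.

```python
# Minimal sketch reproducing the sample-size arithmetic described above
# (95% confidence, 5% margin of error, population of 2330 family physicians).
import math

def required_sample_size(population: int, margin: float = 0.05,
                         z: float = 1.96, p: float = 0.5) -> int:
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

print(required_sample_size(2330))   # ~330, as reported above
```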
Statistical analysis was performed using MedCalc software for Windows (v. 11.5.1.0; MedCalc Software, MedCalc Software Ltd, Mariakerke, Belgium). Results are presented as numbers (proportions) or the mean ± standard deviation where appropriate. The normality of continuous data distribution was tested with the Kolmogorov-Smirnov test. Student's t-test and one-way analysis of variance were used to determine the differences among physicians' knowledge scores relative to work experience, number of patients and the number of scientific papers covering the topic of hyperuricemia read in the past year. Finally, a multiple regression analysis adjusted for age and gender was used to assess the independent predictors of the total knowledge score. For this purpose, we used the forward selection algorithm, while unstandardized beta (β) coefficients, standard error (SE), t-value and p values were reported. The level of p < 0.05 was considered statistically significant.
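As a simple illustration of the group comparison described above, the following sketch runs an independent-samples t-test in Python rather than MedCalc; the generated scores are hypothetical and only mimic the reported group means and standard deviations.

```python
# Minimal sketch (hypothetical scores, not the study data): comparing knowledge
# scores between physicians who did and did not read a paper on hyperuricemia
# in the past year.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
non_readers = rng.normal(6.5, 2.05, size=152)   # hypothetical group, mean/SD as reported
readers = rng.normal(7.04, 2.14, size=183)      # hypothetical group, mean/SD as reported

t_stat, p_value = stats.ttest_ind(readers, non_readers)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```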
Results
This study included a total of 336 primary care physicians (calculated response rate 14.4%), among which there were 275 (81.8%) women. Most of the physicians included in the study were 31-54 years old, and 14% of them were residents. Most physicians had 1500-2000 patients in care and had practices in areas with populations of under 50,000 people (Table 1).
Most physicians stated that they had around 5-10 cases of asymptomatic hyperuricemia per month in daily practice (N = 220, 65.5%), while most stated that they had on average one case of gout per month in daily practice (N = 217, 64.6%). Around 30% of physicians reported that less than 5% of their patients with hyperuricemia received pharmacological treatment. Around half of the physicians included in the study had not read a single scientific paper on asymptomatic hyperuricemia or gout in the past year (N = 152, 45.2%). Moreover, most of them (N = 198, 58.9%) never referred a newly diagnosed patient with uric arthritis to a rheumatologist ( Table 2).
Around 60% of physicians believe that guidelines for the management of patients with asymptomatic hyperuricemia would be of great assistance in their everyday practice. Furthermore, physicians (69.6%) greatly valued national referent values of serum uric acid levels as important cut-off margins for making decisions about starting pharmacotherapy in patients with asymptomatic hyperuricemia. More than half of physicians agreed that they are satisfied with their approach regarding their care of patients with asymptomatic hyperuricemia and gout. However, a significant proportion of physicians (42.3%) was not familiar with the European League Against Rheumatism (EULAR) evidence-based recommendations for the management of gout nor do they use them in everyday practice. Most physicians (67.2%) based their approach to patients with asymptomatic hyperuricemia on their clinical experience (Table 3). Physicians who had read at least one scientific paper covering the topic of hyperuricemia in the past year scored significantly higher in knowledge questions (N = 152, 6.5 ± 2.05 vs. N = 183, 7.04 ± 2.14, p = 0.002, range 0-16). Considering work experience, the greatest knowledge scores were found among physicians with 11 to 20 years of work experience (median = 7, IQR 6-9) (Figure 1). The overall knowledge score of physicians included in the study was 7 (IQR 5-8). Considering the number of patients, the greatest knowledge scores were observed among physicians who had fewer patients (median = 7, IQR 6-8; p = 0.017) (Figure 2).
Physicians were unsure about what should be considered asymptomatic hyperuricemia (3%) but were well informed about the non-pharmacological interventions for hyperuricemia and the drugs of choice in the treatment of gout or hyperuricemia. Only around half correctly identified drugs that could lower or elevate serum uric acid levels. Furthermore, the analysis of correct answers to specific questions further showed poor understanding of the pathophysiology of hyperuricemia and possible risk factors (Table 4).
Multiple regression analysis showed that younger physicians with fewer patients in care were more likely to score higher in knowledge (p = 0.002; Table 5).
Discussion
The results of this study show a modest knowledge of asymptomatic hyperuricemia and gout management among primary care physicians in Croatia. Physicians were most informed about treatment options for asymptomatic hyperuricemia and gout but showed a poor understanding of the underlying pathophysiology and risk factors. In comparison, in a study conducted among primary care physicians in Saudi Arabia, knowledge scores were 3% for mechanisms and 62.7% for dietary recommendations [17]. In the present study, 88.1% knew the non-pharmacological approach to the management of hyperuricemia while 31.1% correctly identified the most common reasons for elevated serum uric acid levels. In a study by Kostka-Jeziorny et al., physicians showed poor awareness of the relationship between hyperuricemia and ischemic heart disease or chronic kidney disease [18].
Hyperuricemia indicates an increased risk of diabetes mellitus or metabolic syndrome and it may lead to hypertension and cardiovascular disease [19,20], therefore it is essential to provide these patients with adequate care. Elevated serum uric acid has been associated not only with several cardiovascular risk factors but also with an increased cardiovascular mortality in the general population [21,22] and after acute myocardial infarction [23,24]. Moreover, in the setting of acute myocardial infarction, hyperuricemia has been linked with an increased inflammatory response [25]. Less than 5% of physicians correctly defined asymptomatic hyperuricemia and were able to identify the goal or main reason for treating hyperuricemia. Our results are consistent with previous research, as studies show that physicians tend to underestimate the effect of hyperuricemia in patients with a high risk of cardiovascular disease [1]. Moreover, a recent study from Poland revealed that a relatively small proportion of physicians are aware of the recommendations for the treatment of hyperuricemia in patients with high cardiovascular risk [18].
A study from Japan showed that only around 10% of patients with hyperuricemia were diagnosed with asymptomatic hyperuricemia in daily practice [3,4]. Most physicians in this study reported having 5-10 cases of asymptomatic hyperuricemia per month in their practice. It is possible that a proportion of cases remains undiagnosed in the primary care setting. Greater understanding of the pathophysiology, possible causes, related conditions and eventual risks of hyperuricemia may raise awareness about the importance of identifying and monitoring patients with hyperuricemia in primary care.
Moreover, the results of this study imply that primary care physicians with fewer patients in care have a greater knowledge of treating these patients. The reason for this is likely a smaller load on these physicians that gives them more time and freedom for education and more thorough patient evaluation. Furthermore, the greatest knowledge scores about hyperuricemia management were observed among physicians with 11-20 years of work experience. These physicians are young specialists. The results show greater knowledge among physicians who have read at least one scientific paper on this topic in the past year, further stressing the importance of continuous medical education among primary care physicians.
Although most physicians stated that guidelines for the management of patients with asymptomatic hyperuricemia would be of great assistance in their everyday practice, many were also satisfied with their approach to treating patients with asymptomatic hyperuricemia or gout. Considering the modest knowledge among primary care physicians, there is an obvious need for guidelines and education in this area. Not many physicians were aware of EULAR evidence-based recommendations for the management of gout, hence not many used them in everyday practice. These findings are in accordance with previously published research. The lack of education leaves many physicians perceiving gout as an acute disease and offering patients painkillers when necessary rather than long-term medication [26].
Unfortunately, efforts at educating primary care physicians to manage gout effectively and to educate their gout patients sufficiently have not been successful. This issue is aggravated by the fact that gout is mostly managed in primary care and that rates of adherence to urate lowering therapies are 50% or less, worse than most other chronic illnesses [27].
Less than 5% of physicians reported that more than 60% of their patients with hyperuricemia receive pharmacological treatment. A diet with a reduced intake of purine rich foods may reduce urate levels. However, the effects of such an approach are modest and limited, since most patients with hyperuricemia have genetically determined low urate excretion [5,28]. Still, asymptomatic hyperuricemia should generally not be treated, as most of these patients will not develop gout [29]. In the present study, we identified that roughly 88% of physicians are familiar with the non-pharmacological approach to hyperuricemia treatment. However, the results also imply that physicians tend to over-estimate the likely effect of these treatments with only 29.8% providing a correct answer to the question about the effect of non-pharmacological treatment options for lowering hyperuricemia.
In the management of patients with this condition, primary care physicians should be aware of drugs that may raise urate levels [30]. However, in the present study, 62.8% correctly identified medications that may elevate serum uric acid levels. Since patients with hyperuricemia may present with a number of comorbidities [5], it is important that their physicians are aware of the optimal treatment choice available.
Limitations
This study was survey based and anonymous; however, participants may have provided socially desirable answers to some of the questions. The survey used in the study was not validated in different populations. This should be taken into consideration when interpreting the results of the study. Additionally, it should be stressed that the data on the number of patients were self-assessed by physicians and were not retrieved from official data records.
Conclusions
This study identified gaps in primary care physicians' knowledge essential for the adequate management of patients with asymptomatic hyperuricemia and gout. Primary care physicians are not aware of the risks associated with elevated serum uric acid levels and are in a large proportion unable to correctly identify drugs that may lead to elevated serum uric acid levels. Among physicians in Croatia, not many are familiar with the EULAR guidelines, but they think that they would benefit from guidelines for the management of such patients. As hyperuricemia and gout are among the fastest rising non-communicable conditions, greater awareness of the available guidelines and more education about the causes and risks of hyperuricemia among primary care physicians may contribute to a reduction in the development of other diseases that have hyperuricemia in their background. In the present study, we identified that 62.8% of physicians correctly identified drugs that elevate serum uric acid levels and 47% correctly identified drugs that lower serum uric acid levels. In addition, the results indicate that physicians are unable to provide an accurate estimate of the effect of non-pharmacological interventions for the treatment of hyperuricemia and gout. For future studies it would be of great interest to see the percentage of physicians who would prescribe the different therapeutic options (urate lowering therapy, non-steroidal anti-inflammatory drugs, colchicine, corticosteroids) for patients with asymptomatic hyperuricemia considering the patient's comorbidities, as well as to further investigate knowledge about the effect of diet and other non-pharmacological interventions in patients with hyperuricemia.

Informed Consent Statement:

Informed consent was not obtained due to the fact that the survey was anonymous and did not gather data that could be used to reveal the identity of the participants.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical restrictions.
Conflicts of Interest:
The authors declare no conflict of interest.
Applying Unique Molecular Identifiers in Next Generation Sequencing Reveals a Constrained Viral Quasispecies Evolution under Cross-Reactive Antibody Pressure Targeting Long Alpha Helix of Hemagglutinin
To overcome yearly efforts and costs for the production of seasonal influenza vaccines, new approaches for the induction of broadly protective and long-lasting immune responses have been developed in the past decade. To warrant safety and efficacy of the emerging cross-reactive vaccine candidates, it is critical to understand the evolution of influenza viruses in response to these new immune pressures. Here we applied unique molecular identifiers in next generation sequencing to analyze the evolution of influenza quasispecies under in vivo antibody pressure targeting the hemagglutinin (HA) long alpha helix (LAH). Our vaccine targeting LAH of hemagglutinin elicited significant seroconversion and protection against homologous and heterologous influenza virus strains in mice. The vaccine not only significantly reduced lung viral titers, but also induced a well-known bottleneck effect by decreasing virus diversity. In contrast to the classical bottleneck effect, here we showed a significant increase in the frequency of viruses with amino acid sequences identical to that of the vaccine-targeted LAH domain. No escape mutant emerged after vaccination. These results not only support the potential of a universal influenza vaccine targeting the conserved LAH domains, but also clearly demonstrate that the well-established bottleneck effect on viral quasispecies evolution does not necessarily generate escape mutants.
Introduction
Influenza A viruses (IAVs) remain a major public health concern, causing severe respiratory tract infections, especially in young children, the elderly, and patients with chronic and multiple morbidities. Seasonal epidemics can cause high mortality throughout the world [1]. Antiviral drugs for the treatment of influenza infections must be administered early during IAV infection and provide only partial reduction in disease severity. High levels of anti-drug resistance further undermine their application [2]. Therefore, vaccination is the most efficient and cost-effective public health intervention against IAV infections.
IAV vaccine strategies rely on a protective antibody response against the hemagglutinin (HA) and neuraminidase (NA). Both of these surface glycoproteins are highly variable among IAVs circulating in humans and animals [3]. Current vaccines induce only a narrow and strain-specific immunity. Because of antigenic drift and shift [4], they have to be reformulated for every season to match the circulating strains. It is therefore essential to improve current vaccines and to develop drift resistant strategies providing a broad and lasting protection. Such universal IAV vaccines are based on conserved domains of viral proteins. Usually these domains are less exposed to the host immune system and are less immunogenic resulting in less immune pressure-derived antigenic changes [1,5]. In contrast to the conserved influenza matrix protein and nucleoprotein epitopes, which provide only weak protection in human challenge studies [1], the conserved stalk portion of HA is a much more potent candidate for a universal vaccine [1,5,6]. The highly conserved long α-helix (LAH) of the stalk domain spanning amino acid (aa) 76-130 has been shown to induce broadly reactive and protective Ab responses [7][8][9].
Due to an error-prone replication machinery, IAVs can easily adapt to immune pressure. Therefore, it is of interest to monitor the emergence of escape mutants in particular against otherwise conserved epitopes [10][11][12][13]. In the present study, we used next generation sequencing (NGS) to investigate the impact of antibody mediated immune responses against LAH on strain diversity within the targeted region [7,8]. Unlike conventional techniques, NGS provides a direct and extensive snapshot of in vivo viral quasispecies [14][15][16]. Moreover, we incorporate unique molecular identifiers in our approach which has been previously shown to dramatically lower the sequencing error rate and to improve the sensitivity for detection of variants with very low frequencies [17,18]. Focusing on the LAH domains, we showed that there was a constrained evolution of viral quasispecies after vaccination and no emergence of a new major virus variant. These findings demonstrate that LAH is a promising candidate as a safe and potent universal influenza vaccine target.
Mice and Viral Infection
Female BALB/c mice were purchased from Harlan Laboratories, Inc. (Horst, The Netherlands). 8-week-old BALB/c mice were immunized intraperitoneally 3 times at 2-week intervals with LAH-HBc chimeric proteins containing the LAH of pH1N1/09 (A/Luxembourg/46/2009, pH1N1) and MF59/AddaVax TM (Invivogen, Toulouse, France). The LAH domain from pH1N1/09 has been incorporated into hepatitis B virus core protein (HBc) using the recently developed tandem core technology [9]. The LAH-HBc chimeric proteins were produced in E. coli BL21 as described previously [19], and purified using two consecutive size exclusion chromatography steps. Mock group mice received adjuvant only. Two weeks after the final immunization, mice were challenged intranasally with 5 × the 50% Mouse Lethal Dose (MLD50) of pH1N1/09, after anesthesia with isoflurane [20]. Mice were sacrificed 5 days post infection (dpi) (n = 3 for Mock, n = 6 for Immunized) or 7 dpi (n = 2 for Mock, n = 3 for Immunized) for organ removal (Figure 1A). Animals with a 20% weight loss were sacrificed to conform with humane endpoint recommendations.
Ethics Statement
All mouse experiments were performed in accordance with protocols approved by the Animal Welfare Structure of Luxembourg Institute of Health and by the Minister of Agriculture, Viticulture and the Consumer Protection of the Grand Duchy of Luxembourg (Ref. LNSI-2014-02, permission date (1 September, 2014)).
ELISA
Sera from individual mice were collected 10 days after the final immunization and were tested for seroconversion. LAH-specific IgG in mouse serum was measured by indirect ELISA. A free polypeptide covering the entire sequence of the LAH region from the pH1N1/09 virus (amino acids 76-130) was synthesized with a MultiPep RS peptide synthesizer (Intavis AG, Tuebingen, Germany) by a modified SPOT synthesis protocol. Wells of 384-well microtiter plates (Greiner, Diegem, Belgium) were coated overnight at 4 °C with 20 µL/well of 2.5 µg/mL resuspended LAH polypeptide or purified HA in carbonate buffer (100 mM, pH 9.6), or with carbonate buffer alone as a background control. All subsequent steps were performed at room temperature. Wells were washed sequentially in washing buffer (Tris containing 1% Tween 20) and blocked for 2 h with 1% BSA in Tris buffer. After washing, sera (starting at a 100-fold dilution) were added, incubated for 90 min, and washed. Bound IgG was detected using alkaline phosphatase (ALP)-conjugated goat anti-mouse IgG (1/750 dilution, ImTec Diagnostics, Antwerp, Belgium). Color reactions were developed using 2-amino-2-methyl-1-propanol. Absorbance was measured at 405 nm (Spectromax Plus, Sopachem, Eke, Belgium). Purified HA for A/California
RNA Extraction and Library Preparation
RNA was extracted from wildtype virus and from the supernatant of the homogenized lungs using the QIAamp Viral RNA Mini kit (Qiagen, Hilden, Germany). The quality of the extracted RNA in the final eluent of 60 µL was analyzed employing an Agilent Bioanalyzer 2100 (Agilent Genomics, Santa Clara, CA, USA). The libraries for sequencing were prepared following an adapted protocol from Buerckert et al. (in preparation), Kinde et al. [17], Vollmers et al. [22], and Loman et al. [17,22,23] (Figure 3A). Reverse transcription was performed on 100-200 ng virus RNA (NanoDrop, Thermo Scientific, Waltham, MA, USA) with Superscript IV (ThermoFisher), following the provided protocol. The primer for reverse transcription (5'-CACAGTTCACAGCAGTAGGTAAAGA-3') was linked to a unique identifier (UID) consisting of 14 random nucleotides, which enables the recognition of every original mRNA strand after amplification. The primer was connected to four different mouse identifiers (MIDs) to allow pooling of samples from different mice on a single sequencing chip. Furthermore, the primer contained a short sequence of the IonTorrent A-Adaptor. After the reverse transcription, the second strand synthesis was performed with Phusion enzyme (New England BioLabs, Ipswich, MA, USA) on half the outcome of the reverse transcription. The reverse primer (5'-ATTGCCCCCAGGGAGACTAC-3') was connected to a short sequence of the IonTorrent P1-Adaptor (2 min at 98 °C, 2 min at 50 °C, 10 min at 72 °C). After second strand synthesis and before amplification, the samples were purified twice with Agencourt AMPure XP beads (Beckman Coulter, Suarlée, Belgium) at a ratio of 1:1. For the final amplification step of the library, Q5 enzyme (New England BioLabs, Ipswich, MA, USA) in combination with a primer mix of A-Adaptor and P1-Adaptor was used (5 min at 98 °C; 20× (10 s at 98 °C, 20 s at 65 °C, 30 s at 72 °C); 2 min at 72 °C). The finished library was purified once with Agencourt AMPure XP beads before being analyzed for both quality and quantity on Agilent Genomic's Bioanalyzer 2100 (High Sensitivity chip) before deep sequencing. The amount of library input RNA for NGS was determined by the reciprocal library output quality of 150 fluorescence units on Agilent Genomic's Bioanalyzer 2100 (Table S1). For each sample, libraries with different amounts of RNA input were prepared. Those libraries that showed similar end concentrations were chosen to be sequenced. This approach allowed us to control for a similar RNA input between different samples. In combination with UID barcoding, this allows for the exclusion of a possible effect of differences in viral titers on the rate of false positive variant calls between the groups [18].
High-Throughput Next Generation Sequencing
Libraries of two immunized and one mock mouse were pooled to be sequenced on the same chip. Wildtype virus libraries were done in triplicates and pooled together. Template preparation was done using the Ion PGM™ Template OT2 400 Kit (Life Technologies Europe BV, Gent, Belgium). For sequencing, the Ion PGM™ Sequencing 400 Kit (Life Technologies Europe BV, Gent, Belgium) and Ion 316/318 Chip Kits v2 (Life Technologies Europe BV, Gent, Belgium) were used. For all these steps protocols were followed as indicated by LifeTech (Paisley, UK).
NGS Data Analysis
Trimmed BAM files were exported from the IonTorrent platform. An in-house pipeline adapted to the IonTorrent technology was built to filter out poor quality reads and to correct for homopolymer errors (Figure 3B and refer to the details below). In order to normalize for differences between sequencing runs, similar to the principle of quantile normalization [17,24], we compared the percentage of reads with a minimum coverage of 3 (chosen because consensus building requires at least 3 copies) for every chip. A minimum UID copy number for a sequence to be included in the data analysis was calculated per chip, so that the same percentage of reads (33%) from every chip would be used. The threshold of 33% was determined by checking, for every chip, the percentage of reads with a copy number greater than or equal to 3 and then selecting the lowest percentage of all chips. To build a reliable consensus sequence from only 3 copies for a given UID, a minimum of 2 need to be the same to define a reliable nucleotide read. The thresholds were calculated individually for every degree of coverage using the cumulative binomial distribution, with q = 0.015625 and α = 1 − q. The cumulative binomial distribution was used to create a lookup list assigning, to every possible UID copy number, the number of identical copies needed to always reach the same probability.
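One plausible way to derive such a copy-number threshold table is sketched below. The authors' in-house pipeline is not published here, so the exact criterion is an assumption; the sketch only illustrates how a cumulative binomial threshold with q = 0.015625 reproduces, for example, the "2 out of 3 copies" rule stated above.

```python
# Minimal sketch (one plausible reading of the described criterion, not the
# authors' code): for each UID coverage, find the smallest number of agreeing
# copies so that the call is unlikely to arise from sequencing error alone.
from scipy.stats import binom

q = 0.015625      # assumed per-base error probability
alpha = 1 - q     # required confidence level

def min_agreeing_copies(coverage: int) -> int:
    """Smallest k such that P(at most k-1 erroneous copies) >= alpha,
    i.e. k identical copies are unlikely to all be errors."""
    for k in range(1, coverage + 1):
        if binom.cdf(k - 1, coverage, q) >= alpha:
            return k
    return coverage

threshold_table = {n: min_agreeing_copies(n) for n in range(3, 21)}
print(threshold_table)   # e.g. coverage 3 -> 2 agreeing copies, as stated above
```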
Trimmed BAM files exported from IonTorrent platform were imported. In a first quality filtering step, reads with 20% less than the expected length were filtered out and only reads which had more than 95% of nucleotide positions with a minimum quality score of 20 were kept for further analysis. Then, after splitting the reads from the different samples according to their MIDs, they were grouped into reads originally coming from the same mRNA template input by using the UIDs. Errors in poly-A-sequences were corrected to forestall the most commonly reported indel errors in IonTorrent sequencing by replacing homopolymeric regions with the correct number of nucleotides (as in the virus reference sequence) [25,26]. Sequences were cut to include only the epitope (corresponding to HA amino acids 420-474) that the vaccine construct targets. Within each selected UID family ( Figure 3B), we aligned reads using MAFFT (a multiple sequence alignment program) and built a consensus sequence using Biopython (gap_consensus). Geneious and Bioedit were used to convert the nucleotide sequences into amino acid sequences and to study mutations. The Shannon Diversity index was calculated [25] to determine diversity of quasispecies population. All sequences, including the ones having frameshifts or stop codons (1-2% of final sequences) were included in the diversity calculations. The heat map showing each diversified amino acid position on the LAH epitope was generated using R program. Values for each mutation have first been normalized by determining the percentage of total consensus sequences they make up in the individual samples before a clustering analysis. An overview of the sample properties for NGS is shown in Table S1. Details of amino acid sequences in each sample are listed in Tables S2 and S3.
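For reference, a Shannon Diversity index of the kind used above can be computed from the UID-consensus sequences as sketched below; this is not the authors' pipeline, and the short example sequences are hypothetical.

```python
# Minimal sketch (not the authors' pipeline): Shannon diversity of a set of
# consensus sequences, computed from the frequency of each unique variant.
import math
from collections import Counter

def shannon_diversity(sequences):
    counts = Counter(sequences)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Hypothetical UID-consensus amino acid sequences for the targeted epitope
consensus = ["MNTQ"] * 95 + ["MNTR"] * 3 + ["MNAQ"] * 2
print(f"Shannon diversity: {shannon_diversity(consensus):.3f}")
```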
Statistical Analysis
Multiple t-tests, unpaired t-tests, one-way ANOVA followed by Tukey's post-hoc test, and the Benjamini and Hochberg correction were performed in GraphPad Prism 5 (San Diego, CA, USA) to determine statistical significance. A p value less than 0.05 was considered significant.
Induction of Broadly Reactive Anti-LAH Antibodies by Immunization
To elicit antibodies against the influenza virus LAH domain, BALB/c mice were immunized three times with LAH-HBc chimeric proteins at two week intervals ( Figure 1A). Seven days after the third immunization, all immunized animals showed high titers of antibodies against a synthetic LAH peptide, while the mock group showed no reactivity (Figure 2A). The LAH antisera reacted not only with the homologous H1 HA protein but cross-reacted also with multiple group 1 and group 2 HA proteins, including H2, H3, H5, H7, and H9 ( Figure 2B,C). Thus, LAH-HBc chimeric proteins induced broadly reactive anti-LAH serum against both group 1 and 2 HA proteins.
Reduced Lung Virus Titers in Immunized Mice
After immunization, animals were challenged intranasally with 5× MLD50 of IAV and the viral load was measured in the lung at five and seven days post infection (dpi). A significant difference in virus lung titers emerged at 7 dpi between immunized and mock animals ( Figure 2D). In parallel, another group of animals receiving the same immunization and viral challenge were maintained for survival observation. As shown in Figure S1, LAH-HBc chimeric protein immunized mice were significantly protected against lethal challenge with pH1N1 and H3N2 viruses while in the mock group all animals died.
Decreased Diversity of LAH Epitopes
We further characterized the virus on a population level at 5 dpi and 7 dpi using NGS (Figures 1B and 3). The overall diversity of viral quasispecies in the LAH domain in the different groups was calculated using the Shannon Diversity Index. The wildtype virus expanded in vitro in MDCK cells showed the highest variability within the tested region (Figure 4A). At 5 dpi, viruses from the lung of immunized mice showed significantly less variability than those of the mock group. This was no longer true at 7 dpi (Figure 4A). These results demonstrate that in the animals the virus loses diversity and that this effect is more dramatic in immunized mice than in the control group. Inversely, when comparing the diversity of nucleotides with amino acids, the ratio was significantly increased at 5 dpi in the immunized mice compared to the wildtype virus and mock animals (Figure 4B). Thus, in the immunized animals the virus has more synonymous mutations compared to the one from the wildtype virus and the mock group.
[Figure 4 legend fragment: see also Figure 5B and Figure S2A. (B) Nucleotide to amino acid entropy ratios of the Shannon Diversity calculated for each sample. * p < 0.05, ** p < 0.01, and *** p < 0.001. Error bars represent SEM.]
Absence of Escape Mutants Following Vaccination
To understand the observed differences in quasispecies diversity, we examined the missense mutations in the different groups. In order to select relevant mutations only, we focused on those that appeared in at least 3 of the 18 samples. With this approach we found 212 possible amino acid exchanges within the analyzed epitope, which covered all 55 positions. A heat map was generated to compare the occurrence of these missense mutations between the groups (Figure 5A). Hierarchical clustering based on these occurrence levels resulted in three major clusters, one composed only of wildtype virus samples (cluster 1), a second one composed only of samples from immunized mice (cluster 2), and a third one containing all mock samples and two of the 7 dpi samples from the immunized group (cluster 3) (Figure 5B). This observation could be further confirmed by applying principal component analysis (PCA) to the frequencies of mutants among the different groups (Figure S2A). The two marked 7 dpi samples showed a similar level of diversity as the mock mice (Figure 4A and Figure S2A). An overall gradual decline in mutated virus sequences measured at 5 dpi can be observed when comparing the wildtype virus, mock, and immunized samples (Figure 5C), confirming again the differences in diversity observed between the groups (Figure 4A).
Identification of Diversified Positions on LAH Epitope
In order to extract the most significantly mutated positions of the analyzed epitope, the Shannon Diversity index was calculated for every amino acid position across the LAH epitope. In this way, 11 different positions were identified to reach statistical significance when comparing the groups using a two-tailed Student's t-test (Figure 6A). At each of these positions, except for position 123, the immunized group showed a lower or similar aa diversity as compared to the mock mice. After Benjamini and Hochberg correction, the diversity at amino acid positions 85 and 96 was still significantly higher in the cultured virus than in the in vivo virus populations. Variability at these two positions was the major contributor to the differences in diversity observed between the groups (Figure 4A), as confirmed by PCA analysis on the frequency of mutants (Figure S2A). Positions 121 and 124 seem to be generally more diverse but with little difference among groups (Figure 6A). The PCA analysis on the Shannon Diversity per position showed differences between in vitro (Wildtype Virus) and in vivo (Mock and Immunized) samples (Figure S2B). However, the differences between Mock and Immunized mice were not clearly distinguishable by using the Shannon Diversity index alone (Figure S2B), indicating the need for a more detailed analysis of the amino acid substitutions.
Analysis of Amino Acid Substitutions
To identify the mutations that were selected by the vaccine-induced immune pressure, all of the identified 212 amino acid exchanges were compared between the Immunized and Mock groups. Notably, the frequency of the mutation D85N retained a significant difference between the two groups following the Benjamini and Hochberg correction. The mutation D85N had an average number of reads of 97.8/sample (average frequency: 0.86%) in the immunized group vs. 474.2/sample (average frequency: 3.09%) in the control group (Figure 6A,B). This substitution showed a significantly reduced frequency in the immunized animals (Figure 6B). No mutant variant emerged that replaced the dominant viral sequence in any of the samples. There were, however, significant differences in the occurrence of the dominant viral sequence between groups (Mock: 93.70 ± 1.01%; Immunized: 96.76 ± 1.40%; p = 0.01 after the Benjamini and Hochberg correction, Figure 6B), suggesting a constrained viral quasispecies evolution in the immunized group.
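For readers unfamiliar with the procedure, the Benjamini-Hochberg adjustment applied across the many per-mutation comparisons can be illustrated as in the sketch below; the study used GraphPad Prism, so this Python version and its p values are illustrative only.

```python
# Minimal sketch (hypothetical p values, not the study data): applying the
# Benjamini-Hochberg false discovery rate correction across per-mutation tests.
import numpy as np
from statsmodels.stats.multitest import multipletests

raw_p = np.array([0.0001, 0.003, 0.02, 0.04, 0.3, 0.7])   # hypothetical per-mutation p values
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")

for p, padj, sig in zip(raw_p, p_adj, reject):
    print(f"raw p = {p:.4f} -> adjusted p = {padj:.4f} {'*' if sig else ''}")
```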
Discussion
In the present study, we applied NGS to follow the in vivo evolution of IAV quasispecies in response to our novel HBc-based LAH fusion protein. This new LAH targeting vaccine induced a strong antibody response and protected against both homologous (pH1N1) and heterologous IAV (H3N2) strains in mice. Interestingly, after vaccination, the virus quasispecies showed a significantly reduced variability in amino acid sequences of the LAH domain and no escape mutant emerged.
The analysis of quasispecies is complicated by the need to reduce technical nucleotide substitution artefacts to a minimum. Therefore, we chose the IonTorrent platform, which in contrast to other NGS technologies has a low propensity for substitution errors (0.085/100 bp) [27]. On the other hand, this platform is prone to indel errors (0.6/100 bp), which, however, can be reliably removed when the corresponding reads are compared with a reference sequence. To further reduce substitution errors, we applied a UID barcoding technique that allows individual RNA strands to be traced back [17,23]. This technique also enabled us to pool samples from different groups on a single sequencing chip to minimize differences caused, for example, by chip loading. Inter-chip normalization further improved the comparability of different samples measured on different chips [28]. Compared with other NGS approaches, Kinde et al. [17] and Peng et al. [18] have reported a dramatic reduction in the sequencing error, of about 20 fold, with a tagging method that is based on labeling nucleotide fragments with primers containing 14-bp UIDs. This UID approach allows for detecting as few as 0.001% mutations per base pair and has the ability to detect 1% mutations with minimal false positives. Although our sequencing approach covered the complete LAH, like most other NGS techniques it does not allow reconstitution of longer sequences (e.g., of the whole HA), in particular when the variability is low. Because of this limitation it would not have been possible to link mutations within LAH with other concurrent HA mutations. Sequencing the whole HA would have provided information about possible escape mutations outside of the targeted epitope region, but current NGS techniques generally do not allow the genome to be reconstituted on a single strand, so that mutations that occur on different amplicons cannot be related to each other in a straightforward way.
We used a mouse infection model to monitor the changes of lung virus quasispecies after vaccination against LAH, i.e., under the pressure of protective LAH antibodies. Unvaccinated animals failed to control IAV replication in the lungs and died at 7 dpi due to pulmonary damage. In response to the LAH vaccine, viral replication in the lungs became detectable at 3 dpi, peaked at day five, and began to decrease at day seven. This is in line with previous observations that a short but rapid virus proliferation in mice with a stronger immunity leads to enhanced activation of the acquired immune system, which ultimately results in rapid virus clearance [29][30][31]. Using Fcγ chain-deficient mice, DiLillo et al. [32] have shown that mice that received broadly neutralizing HA stalk-specific antibody treatment showed minimal weight loss compared to PBS-treated mice at day 7, and that such an effect requires FcγR interactions for protection against influenza virus in vivo. Thus, our observation of the significant reduction of lung viral titer at day 7 might be associated with the effect of virus-specific cytotoxicity mediated by LAH-specific antibodies. A bottleneck effect [16,[32][33][34], i.e., a reduction in viral diversity concomitant with emerging escape mutants, is often observed in viral evolution studies under host immune selection pressure in humans. Here we indeed observed a reduced diversity in the LAH domain at 5 dpi, but unexpectedly, without any emerging escape mutant. Our analysis showed that, in the immunized animals at 5 dpi, the virus has more synonymous mutations. This is indicative of an ongoing adaptation process in these mice, which generates higher genetic diversity with higher mutation rates in nucleotide sequences albeit under structural or functional constraints [35]. It appears that the virus attempted to generate a higher genetic diversity, but only mutant variants without changes at the amino acid level were able to survive. Our observation that mutations are unfavorable in the LAH domain may be due to the domain's important role in the conformational changes of HA that are induced by low pH during fusion between the viral envelope and the host endosome membrane.
We observed one mutation that showed a conspicuous and significant difference in prevalence between the immunized and the mock group: D85N in the heptad repeat domain [36] was significantly lower in the immunized group. In addition, previous studies have shown that it is common in nature for influenza virus variants to possess shorter length of genes due to premature termination [37][38][39]. Indeed, we observed 1-2% of final sequences containing frameshifts or stop codons. Furthermore, we also found an unexpected preferential proliferation of viruses with non-mutated LAH in the immunized mice. This suggests that the virus failed to generate mutations in the LAH domain that were compatible with virus fitness. The absence of major non-synonymous substitutions within this region indicates that the amino acid composition is highly restricted, leaving little room for escape mutants with sufficient viral fitness to develop. This is in line with the extensive conformational changes that the LAH undergoes during viral proliferation [36]. Consistent with our notion, Chai et al. [40] have reported that under the very focused pressure of a protective monoclonal antibody against LAH, escape mutants can develop in vitro, but these suffer from a reduced fitness both in vitro and in vivo. Although mutant variants increased again in proportion at the later timepoint, no new mutant variant dominated the virus population. The proportional increase in mutant variants from 5 dpi to 7 dpi in the vaccinated group could be explained by the observed decrease in total virus titers at the later stage ( Figure 2D). Another possible explanation for this phenomenon could be an unequal pressure on the different mutant variants as a result of a decline in the broadly reactive CTL response [41,42].
Other mutations may have occurred in other regions of HA outside of the targeted LAH domain, that may protect the virus against LAH antibodies. In our approach, we focused first on the targeted epitope as proof of concept. Synergetic mutations using this approach cannot be detected and further studies sequencing the whole genome to confirm and extend our findings will be of interest. Moreover, because of the above difficulties to link them to LAH mutations, it would have been challenging to interpret these. The conventional seasonal split vaccine induces a much larger panoply of protective antibodies against the HA head [24] and none against the HA stalk or the LAH domain [1]. Similarly, the seasonal vaccine is also not known to favor mutations in this domain. In any case, at 7 dpi animals immunized with the split vaccine have essentially cleared the challenge virus, further complicating any comparison between the split and the LAH vaccine. Therefore, in this work we did not include a split vaccine as an additional control. In the future, universal vaccines will have to be judged in comparison to seasonal vaccines so further studies are needed to determine similarities and differences.
Altogether, we observed a bottleneck effect in terms of variability [16] and no increase in sequence diversity, nor the displacement of the dominant virus strain after vaccination against LAH or the development of an escape mutant. These findings provide further support for LAH as a potential vaccine candidate and as proof of concept for a broadly protective influenza vaccine without enhanced propensity for escape mutants.
Effects of polymorphisms in the endothelin receptor B subtype 2 gene on plumage colour in mule ducks
Abstract: The aim of the present study was to investigate the effect of a single nucleotide polymorphism (SNP) of the endothelin receptor B subtype 2 (EDNRB2) gene on plumage coloration in mule ducks. Test mating (white Tsaiya × white Muscovy ducks) in combination with polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) was performed to investigate the effect of non-synonymous SNPs in two maternal lines (a conservation and a selection population) on plumage coloration in mule ducks. One non-synonymous SNP (c.995G>A) was identified in white Muscovy ducks and white Tsaiya ducks by PCR-high-resolution melting (PCR-HRM) and DNA sequencing. Genotyping results showed that the c.995G>A locus is associated with plumage colour in two maternal populations of white Tsaiya ducks. Further, the maternal genotype of the c.995G>A SNP affects the plumage colour of mule ducks. Therefore, the polymorphisms within the EDNRB2 gene at c.995G>A in white Tsaiya ducks may be used in marker-assisted selection to improve the plumage colour of mule ducks.
Introduction
Animal coats and plumage colour are influenced by melanocyte development, pigment production and pigment distribution (Emaresi et al., 2013). Melanoblasts, the precursors of melanocytes, are derived from neural crest cells. Melanoblasts are able to migrate from the neural crest to the developing feather follicles of the epidermis and then to differentiate into melanocytes (Mills et al., 2009). Several genes have been identified that are involved in melanocyte differentiation, proliferation, migration, and subsequent pigmentation.
Single nucleotide polymorphism variations in the EDNRB2 gene influence plumage colour in avian species (Miwa et al., 2007; Kinoshita et al., 2014; Li et al., 2015; Wu et al., 2017). It has been reported that Cys244Phe and Arg332His amino acid substitutions in the EDNRB2 gene are associated with plumage colour in Japanese native chickens (Kinoshita et al., 2014), Japanese quail (Miwa et al., 2007), and common ducks (Anas platyrhynchos) (Li et al., 2015). In Muscovy drakes (Cairina moschata), the SNPs in the EDNRB2 gene that are associated with plumage colour differ from those in other avian species (Wu et al., 2017). However, little is known about the relationship between EDNRB2 gene polymorphisms and plumage coloration in intergeneric hybrids.
Mule ducks are sterile intergeneric hybrids, which are produced by crossing female common ducks (Anas platyrhynchos) with Muscovy drakes (Cairina moschata). The mule duck is a major meat-type duck in Taiwan (accounting for 77% of duck production), and white plumage has economic advantages over black because of consumer preferences. Since the mule duck is an infertile hybrid, breeding strategies have to be applied to the parental generation, particularly the maternal lines. The authors' recent study demonstrated that maternal melanocortin 1 receptor SNP genotypes have a significant influence on white plumage coloration in mule ducks (Tu et al., 2019). Previous studies demonstrated that polymorphisms of the EDNRB2 gene are associated with plumage coloration in avian species (Miwa et al., 2007; Kinoshita et al., 2014), including the parental lines of the mule duck (Li et al., 2015; Wu et al., 2017). However, the potential association between the EDNRB2 gene and plumage colour phenotypes in intergeneric hybrids such as mule ducks has not been validated. Therefore, the aim of the current study was to analyse the association between polymorphisms of the EDNRB2 gene and plumage colour in mule ducks.
Materials and Methods
Research on animals was conducted according to the institutional committee on animal use (IACUC Approval No. LRIIL IACUP 105002 and 106004). To identify SNPs of the EDNRB2 gene, blood from six white Tsaiya ducks (conservation population) and six Muscovy drakes was subjected to genomic DNA extraction. Seven primer pairs targeting SNPs of the EDNRB2 gene were evaluated using the high-resolution melting (HRM) assay. Primers were designed based on the exon sequence of the EDNRB2 gene from the common mallard (NCBI reference sequence: KP203838) using Vector NTI 9.1 (Thermo Fisher Scientific, Waltham, MA, USA) software (Table 1). Polymerase chain reaction was performed with a G-Storm GS4 thermal cycler (GRI, Rayne, Braintree, UK) and One Taq® polymerase (New England BioLabs, Beverly, MA, USA). The PCR conditions were pre-denaturation at 95 °C for 5 min, followed by 35 cycles of 95 °C for 30 seconds, 55-64 °C for 30 seconds, and 72 °C for 30 seconds, with post-elongation at 72 °C for 7 min. The amplified DNA was melted in the high-resolution melting device (HR-1 instrument, Idaho Technology, Salt Lake City, UT, USA) using LCGreen Plus melting dye (Idaho Technology, Salt Lake City, UT, USA). High-resolution melting curve acquisition was performed from 40 °C to 95 °C in 0.2 °C increments for 1 second, and then normalized with LightScanner Software with CALL-IT 2.0 (Idaho Technology, Salt Lake City, UT, USA). After screening the EDNRB2 gene by HRM analysis, the samples from the two ducks with the greatest differences were chosen for further sequencing. The PCR product was purified using the QIAquick PCR purification kit (Qiagen, Valencia, CA, USA) and sequenced with an ABI Prism 3700 DNA sequencer (Thermo Fisher Scientific, Waltham, MA, USA). Sequences were analysed with AlignX software (Thermo Fisher Scientific, Waltham, MA, USA) to identify possible polymorphisms. Based on the sequencing results, PCR-restriction fragment length polymorphism (RFLP) analysis was then used to genotype each SNP in the EDNRB2 gene coding regions in white Tsaiya ducks and mule ducks (Table 2). The enzymatic reactions were performed in 10 μL reaction mixtures containing 8 μL of PCR products, two units of each restriction enzyme, and reaction buffer. The digested products were separated by 3% agarose gel electrophoresis and then visualized with ethidium bromide. For test mating, fifteen female white Tsaiya ducks (Anas platyrhynchos) from a conservation population (18th generation of a natural mating programme without selection) and 19 female white Tsaiya ducks from a selection population (30th generation of a programme that selected for plumage colour using traditional phenotype-based estimated breeding) were mated by artificial insemination with pooled semen from three white Muscovy drakes (Cairina moschata). This mating experiment was part of the authors' previous study (Tu et al., 2019). The plumage colour of the mule ducks was graded based on the area of black spots on the head and back according to a previous study (Lee & Kang, 1997; Tu et al., 2019). Grades 1 to 3 indicated ducks with a black spot on the head; 4 to 7 indicated ducks with a black head and a little spot on the back; 8 to 10 indicated ducks with a black head, black back and black tail; and 11 to 15 indicated ducks from a mottled coat to pure black. Venous blood and tissue samples were collected from the maternal lines (Tsaiya ducks) and the intergeneric hybrid mule ducks, respectively.
The genomic DNA was extracted using a standard phenol/chloroform method and the concentration of purified DNA was measured with a spectrophotometer (NanoVue PlusTM, GE Healthcare, UK). Stock DNA samples were then stored at -20 °C for analysis of polymorphisms of the EDNRB2 gene.
Each duck formed an experimental unit for the association analysis of the EDNRB2 SNP with plumage colour. A chi-square test (χ²) was used to determine whether there was a significant association between the EDNRB2 SNP and plumage colour using SAS software (version 9.2). The chi-square test of independence was performed under the null and alternative hypotheses H0 (independent, no association) and H1 (not independent, association), respectively. The test statistic was χ² = Σ (o - e)² / e, with (r - 1)(c - 1) degrees of freedom, where o and e represent the observed and expected frequencies, and r and c are the numbers of rows and columns of the contingency table. P values lower than 0.05 were considered statistically significant. The effect of maternal line and the c.995G>A SNP on plumage colour grading in mule ducks was analysed using one-way ANOVA through the general linear model (GLM) procedure in SAS software (version 9.2). The means of plumage colour grading were compared with the Tukey test when the probability values were significant (P <0.05).
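For readers who want to reproduce this kind of test outside SAS, the sketch below runs a chi-square test of independence in Python with SciPy on a small, entirely invented genotype-by-plumage contingency table; the counts are placeholders for illustration only and are not data from this study.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 contingency table: rows = genotypes, columns = plumage classes.
# Counts are illustrative only, not data from this study.
observed = np.array([[18,  4],    # genotype GA: white-ish vs dark plumage
                     [ 5, 15]])   # genotype GG
# correction=False matches the plain chi-square formula given above (no Yates correction).
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.3f}, dof = {dof}, P = {p:.4f}")
print("expected counts under independence:\n", expected)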
Results and Discussion
The grading results demonstrated that the plumage colour grading of mule ducks produced from the selection population was lower than that of the conservation population (Table 3).
Table 3 notes: Values are expressed as mean ± SD. 1 Plumage colour grading: 1-3 indicated ducks with black spots on the head; 4-7 indicated ducks with black spots on the head and a little spot on the back; 8-10 indicated ducks with black spots on the head, back, and tail; 11-15 indicated ducks from a mottled coat to pure black. a-b Means within the same row with different superscripts are significantly different (P <0.05).
After screening with HRM analysis and sequencing, twelve synonymous SNPs and one non-synonymous SNP (c.995G>A, p.Arg332His) were detected in the EDNRB2 gene of Tsaiya ducks from the conservation population (Table 4). Of the eight SNPs located in the EDNRB2 gene of Muscovy drakes, five SNPs were synonymous, one was an insertion/deletion variation (c.706delins), one was a nonsense variation (c.273C>G, p.Tyr91 stop codon), and one led to an amino acid substitution (c.995G>A, p.Arg332His) (Table 4).
The c.273C>G SNP in female Tsaiya ducks showed a homozygous genotype CC in both populations, indicating that the maternal line was monomorphic for the C allele. To further study the paternal effect of c.273C>G SNP on plumage colour grading, the pooled semen from Muscovy drakes containing genotype GC and CC in the c.273 SNP locus (Table 4) was mated with female conserved Tsaiya ducks. The results showed that black plumage in mule ducks, which were produced by crossing conserved female Tsaiya ducks with Muscovy drakes, displayed a predominantly CC genotype of c.273C>G SNP (Table 5). However, the mule ducks with GC genotype of c.273C>G SNP did not show a differential plumage colour (Table 5), indicating that no significant association between c.273C>G SNP and plumage colour was found in mule ducks. Similar to c.273C>G SNP, c.706delins SNP in female Tsaiya ducks showed a homozygous genotype DD (deletion mutations) in both the conservation population and selection population, indicating that the female population was monomorphic for the D allele. Muscovy drakes contain genotype DI (deleted and insertion mutations) and DD in the c.706delins SNP locus (Table 4). To examine the paternal effect of c.706delins SNP on plumage colour grading, the pooled semen from Muscovy drakes was mated with female conserved Tsaiya ducks. The results revealed that black plumage in mule ducks displayed a predominant DI genotype of c.706delins SNP, while no distinct plumage colour distribution was found in mule ducks with DD genotype of c.706delins SNP (Table 5). These findings demonstrate that c.706delins SNP in mule ducks was not associated with plumage colour. All the genotypes identified in male Muscovy drakes were GG for c.995G>A SNP, indicating that the male population was monomorphic for the G allele. As expected, a normal genotype distribution and allele frequency of c.995G>A SNP were observed in the conservation population (Table 6). In the selection population, maternal c.995G>A SNP showed a homozygous genotype AA for the advantageous white plumage coloration in their offspring compared with the conservation population (Table 6). Test mating results demonstrated that the c.995G>A SNP in mule ducks produced by crossing conserved female white Tsaiya ducks with Muscovy drakes showed a genotype GA for the advantage of low plumage colour grading, while mule ducks with genotype GG tended to have a higher plumage colour grading (P <0.05) (Table 7). Furthermore, the authors examined the effect of parental genotypes of c.995G>A SNP on plumage colour in mule ducks (Table 8). As expected, c.995G>A SNP in mule ducks produced by crossing selected female Tsaiya ducks with Muscovy drakes showed a genotype GA for the advantage of white plumage colour (grading 2.3) (P <0.05), whereas mule ducks produced by crossing conserved female Tsaiya ducks with Muscovy drakes showed a genotype GG for the advantage of black plumage colour (grading range from 11.4 to 12.3) (P <0.05) (Table 8). The genotype GA of mule ducks produced by crossing conserved female Tsaiya ducks with Muscovy drakes showed a reduced plumage colour grading (average grading 6.85) compared with genotype GG of mule ducks (grading range from 11.4 to 12.3) (P <0.05) (Table 8), indicating that the A allele from the parental generation contributed to determining the white plumage colour in mule ducks.
Table notes: Values are expressed as mean ± SD. a-c Means with a common superscript do not differ at P =0.05.
Japanese quail with 'panda' plumage (white coat with a few pigmented spots on the head and back) had a predominantly AA genotype of c.995G>A SNP, while wild-type plumage had a predominantly GG genotype of c.995G>A SNP (Miwa et al., 2007). The nucleotide position 995, a non-synonymous SNP (c.995G>A) in the EDNRB2 gene, results in an arginine to histidine change at amino acid position 332 (Arg332His) (Miwa et al., 2007). Similarly, the Arg332His amino acid substitution (AA genotype of c.1272G>A SNP) in EDNRB2 gene of chickens tended towards the mottled plumage phenotype (white coat with pigmented spots on the head) (Kinoshita et al., 2014). Furthermore, domestic ducks with spot phenotype (white coat with a few pigmented spots on the back) displayed a predominantly AA genotype of c.995G>A SNP, while non-spot phenotype (black plumage) displayed a predominantly GG genotype of c.995G>A SNP (Li et al., 2015). Consistently, the authors found that white plumage in Tsaiya ducks from the selection population displayed a predominantly AA genotype of c.995G>A SNP. These findings indicate that an arginine to histidine change at amino acid position 332 (Arg332His) in EDNRB2 gene is associated with pigment distribution and results in a white plumage phenotype in avian species.
Over the past few years, only two studies have reported findings on the association between polymorphisms of the EDNRB2 gene and plumage colour in duck species (Li et al., 2015; Wu et al., 2017). It has been reported that domestic ducks (Anas platyrhynchos) from a cross between the white Kaiya and the white Liancheng duck with the spot phenotype displayed a predominantly AA genotype of c.995G>A SNP (Li et al., 2015). The current result is consistent with that finding, in that the AA genotype of c.995G>A SNP in Tsaiya ducks (Anas platyrhynchos) from a selection population is associated with white plumage colour. The authors further demonstrated that white plumage in mule ducks displayed a predominantly GA genotype of c.995G>A SNP. These results imply that the maternal A allele from Tsaiya ducks strongly influences white plumage coloration in their offspring. Whether the paternal A allele from Muscovy drakes also has an impact on white plumage coloration in its offspring remains to be elucidated. However, no similar SNPs and amino acid substitutions were found in the EDNRB2 gene in domestic ducks (Anas platyrhynchos) and Muscovy drakes (Wu et al., 2017). These findings indicate that the EDNRB2 gene polymorphism that is associated with plumage colour in Muscovy drakes is completely different from that in domestic ducks (Li et al., 2015).
The expression of EDNRB2 mRNA from non-pigmented skin of chickens with mottled plumage (Arg332His) is decreased compared with the pigmented skin of chickens with mottled plumage (Kinoshita et al., 2014). Similarly, EDNRB2 mRNA levels from the skin of Japanese quail with panda plumage (Arg332His) are decreased compared with birds with wild-type plumage (Miwa et al., 2007). These results imply that the regulation of EDNRB2 mRNA expression is critical for pigmentation in the skin. However, no significant difference was found in EDNRB2 mRNA expression between the white and black feather bulbs of domestic ducks (Li et al., 2012). Therefore, whether the AA genotype of c.995G>A SNP in Tsaiya ducks and GA genotype of c.995G>A SNP in mule ducks correlate with EDNRB2 mRNA levels in skin needs further study.
In this study, the authors demonstrated that the maternal A allele of the c.995G>A SNP, from either the selection or the conservation population, is a critical factor in the plumage colour of mule ducks. Mule ducks with the GA genotype of c.995G>A SNP that were produced by crossing selected female Tsaiya ducks with Muscovy drakes displayed a low plumage colour grading (grading 2.3). However, mule ducks with the GA genotype of c.995G>A SNP that were produced by crossing conserved female Tsaiya ducks with Muscovy drakes displayed an intermediate plumage colour grading (grading ranged from 6.8 to 6.9). Several plumage colour mutants of ducks are produced by the combined effect of controlled breeding and selection pressures from domestication (Gong et al., 2010). Taken together, these findings imply that other SNPs that are associated with the plumage colour of mule ducks may have been simultaneously selected in the selected Tsaiya ducks by traditional breeding. It also demonstrates that the traditional breeding strategy for improving the plumage colour of mule ducks is highly reliable.
Conclusion
The c.995G>A SNP, which causes an amino acid substitution (p.Arg332His) in the EDNRB2 gene, has a significant association with white plumage in mule ducks. The current findings provide a novel insight into the relationship between maternal EDNRB2 gene polymorphism and plumage colour in intergeneric hybrid ducks.
|
2020-07-09T09:14:13.804Z
|
2020-07-01T00:00:00.000
|
{
"year": 2020,
"sha1": "9d6791fa03f75dd65c7047f3b767adc41ba6dd99",
"oa_license": null,
"oa_url": "https://www.ajol.info/index.php/sajas/article/download/197115/185970",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2aa17ff258926a35f6a685fd19c7bd37a54c224f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
}
|
230819251
|
pes2o/s2orc
|
v3-fos-license
|
Probabilistic models of biological enzymatic polymerization
In this study, hierarchies of probabilistic models are evaluated for their ability to characterize the untemplated addition of adenine and uracil to the 3’ ends of mitochondrial mRNAs of the human pathogen Trypanosoma brucei, and for their generative abilities to reproduce populations of these untemplated adenine/uridine “tails”. We determined the most ideal Hidden Markov Models (HMMs) for this biological system. While our HMMs were not able to generatively reproduce the length distribution of the tails, they fared better in reproducing nucleotide composition aspects of the tail populations. The HMMs robustly identified distinct states of nucleotide addition that correlate to experimentally verified tail nucleotide composition differences. However they also identified a surprising subclass of tails among the ND1 gene transcript populations that is unexpected given the current idea of sequential enzymatic action of untemplated tail addition in this system. Therefore, these models can not only be utilized to reflect biological states that we already know about, they can also identify hypotheses to be experimentally tested. Finally, our HMMs supplied a way to correct a portion of the sequencing errors present in our data. Importantly, these models constitute rare simple pedagogical examples of applied bioinformatic HMMs, due to their binary emissions.
Introduction
In this paper, the framework of Hidden Markov Models (HMMs) was applied to an interesting data set from molecular biology. Our analysis had two major purposes. We wanted to identify strengths and weaknesses of HMMs in discovery and predictive roles for this specific dataset, and highlight the pedagogical utility of this dataset in teaching and exploring HMMs. A HMM is a probabilistic model consisting of a set of 'hidden' states which stochastically transition between each other with fixed transition probabilities; each state also stochastically emits observables with fixed emission probabilities. The hidden states are generated through a Markov chain process, with observables chosen randomly according to a state-specific distribution of probabilities. Historically one of the first uses of HMMs was in the field of speech recognition, and more generally language processing. Indeed Markov himself first used the simpler Markov chain formalism to explore patterns of vowels and consonants in Pushkin's grand poem Eugene Onegin. These applications, while very interesting, are also very complex and are essentially impossible to analyze by hand. With the advent of computers, efficient algorithms for training a HMM on data (the Baum-Welch algorithm) and determining the most probable sequence of hidden states through a HMM given an emission symbol sequence (the Viterbi algorithm) were developed in the 1960s.
In the 1980s and 1990s the use of HMMs exploded in popularity in bioinformatics applications, mainly for analyzing DNA or protein sequence data. A classic example is the GENSCAN gene-finding algorithm of Burge and Karlin [1], which is also quite complex. The introductory examples of HMMs in bioinformatics textbooks are usually quite artificial because of the complexity of emission variables in most natural examples in molecular biology. The problem we consider here has only two emission observables, and so provides a novel and non-contrived setting for simple HMMs with real biological content. The famous text of Durbin, Eddy, Krogh, and Mitchison [2] uses the occurrence of CpG islands to introduce HMMs, but even this relatively simple model requires a minimum of eight hidden states and four emission types. In contrast, the most complicated models we show here have six states and only the nucleotides adenine (A) and uridine (U) as emission types.
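As a minimal illustration of how small such a binary-emission HMM can be, the sketch below encodes a hypothetical two-state model over the alphabet {A, U} and scores a tail with the forward algorithm; all parameter values are invented placeholders, not models or values from this study.

import numpy as np

# Hypothetical two-state, two-emission HMM over the alphabet {A, U}.
# States: 0 = "A-rich", 1 = "U-rich".  All numbers are illustrative only.
start = np.array([0.8, 0.2])                  # P(first hidden state)
trans = np.array([[0.9, 0.1],                 # P(next state | current state)
                  [0.2, 0.8]])
emit = np.array([[0.95, 0.05],                # P(symbol | state); columns: A, U
                 [0.10, 0.90]])
symbol_index = {"A": 0, "U": 1}

def log_likelihood(tail):
    """Forward algorithm: log P(tail | model) for a string such as 'AAAAUUU'."""
    obs = [symbol_index[c] for c in tail]
    alpha = start * emit[:, obs[0]]           # joint probability of state and first symbol
    log_p = 0.0
    for o in obs[1:]:
        # rescale at each step to avoid numerical underflow on long tails
        scale = alpha.sum()
        log_p += np.log(scale)
        alpha = (alpha / scale) @ trans * emit[:, o]
    return log_p + np.log(alpha.sum())

print(log_likelihood("AAAAAAAAAAUUUAAAUU"))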
The mRNA of almost all eukaryotes is modified from its original transcribed sequence by non-templated nucleotide addition (typically polyadenylation, or addition of successive adenosines (As)) to the 3' end. This occurs in both cell nuclei and in organelles that possess independent genomes. The length of a 3' poly-A tail varies greatly among species and transcripts. In humans most non-mitochondrial transcripts have tails of 40-80 As, although the full range is 0 (for some histones, for example, which are polyuridylated [3]) to at least 250 bases [4]. Our focus is the unusual 3' addition of both A and U to mitochondrial transcripts of the human parasite Trypanosoma brucei. Some addition of U to poly-A mRNA tails has been described in other systems, for example in myxomycetes (Stemonitis flavogenita and Physarum polycephalum) [5], fission yeast (Schizosaccharomyces pombe) [6], plants, algal and plant organelles [7][8][9][10], and humans [3]. In S. pombe the uracil additions can affect degradation pathways [11]. However the complexity and role of these additions seem limited compared to those in the mitochondria of T. brucei and other Kinetoplastida. Of all natural non-templated nucleotide addition processes, those in the kinetoplasts may be the best suited to explore a dual-emission-type HMM.
The order Kinetoplastida (NCBI taxonomic id 5653) consists of hundreds of species, some of which are heteroxenous parasites. Some insect-transmitted trypanosomes, including Trypanosoma brucei, which causes African sleeping sickness in humans and nagana in cattle, threaten the health of humans and livestock. The kinetoplastids are characterized by an unusual single mitochondrion containing an extraordinarily large amount of DNA [12], the expression of which requires multiple novel post-transcriptional events [13]. In addition to the previously-mentioned addition of non-templated tails consisting of A and U, most of the mitochondrial mRNAs undergo a targeted insertion and deletion of a few to hundreds of uracils (RNA editing) to generate a translatable sequence [14]. For transcripts undergoing editing, their identity as pre-edited or edited will influence the nucleotide composition of their tail populations. An initial extension (an in-tail) with particular characteristics, including a high composition of A, is added first to each gene's transcript population. However, transcripts that do not require editing to encode their protein, or those in which editing has been completed, can acquire an extension to the in-tail and become an ex-tail with a higher composition of U. Ex-tails are not present on transcripts prior to editing [13]. An example is shown below.
. . . GCUAGG [templated] | UUUAAAAAAAAAAAAAAAAAA [in-tail] | UAAUUAAUAAAAUUAAUAUAU [ex-tail]; the in-tail and ex-tail together constitute the untemplated tail.
In T. brucei the respiratory pathway that is in part encoded in the mitochondrial genome is essential for its survival in the insect but shut off when T. brucei is in the glucose-rich bloodstream [15][16][17]. Remodeling of mitochondrial gene expression occurs as part of this transition [18]. At least some regulation of the mitochondrial transcriptome occurs at the RNA level [13], and we have previously analyzed the variation of content (nucleotide length and composition) in the 3' tail additions between life stages. We have found differences in tail composition between insect (P = procyclic) and bloodstream (B = (mammalian) bloodstream) life stages [18]. Other studies have identified relationships between tail presence and stability [19,20], tail composition and precursor mRNA processing [20], and tails and translation [21,22].
Uridylation of Kinetoplastida mitochondrial transcripts is primarily performed by the protein KRET1 [23], and adenylation by KPAP1 [19]; both are members of protein complexes. A host of RNA binding proteins of the PPR family modulate tail addition and stability [20,22,[24][25][26], and putative enzymes such as KPAP2 [27] may also play roles. Deciphering the mechanism of and roles for 3' tail additions in T. brucei has required genetic manipulation and subsequent tracking of downstream effects such as mRNA tail composition.
This approach has proven hugely informative, but mechanisms of tail addition are clearly complex. We undertook the current study in part to determine what is additionally gained by analyzing 3' tail addition from the opposite orientation. We wished to know if features inherent in tail populations could reveal additional information about the process of untemplated addition that did not become apparent when we used a biologically-guided choice of model [18]. We also wanted to assess the capacity of HMMs for predicting non-templated tail features, including computationally parsing and potentially correcting any biases inherent in the use or manipulation of Illumina sequencing data to characterize tail populations. The results of our study apply to all non-encoded nucleotide synthesis that is not restricted to single-nucleotide homopolymer addition. They also highlight a simple context for HMMs that is of potential pedagogical benefit.
Statistical properties of tail populations
Our current and previous [18,28] work focuses on a dataset of tail populations from transcripts of the T. brucei mitochondrial genes CO1, CO3, and ND1 collected from both the procyclic and bloodstream life stages (denoted CO1B, CO1P, CO3B, CO3P, ND1B, and ND1P henceforth). Unlike CO1 and ND1, CO3 is an edited mRNA. The CO3 tails analyzed here are derived from the pre-edited CO3 transcripts only, and thus are entirely in-tailed. CO1B transcripts also lack ex-tails because even while these mRNAs exist in low abundances in bloodstream-form cells, they are very likely not translated and indeed lack evidence of ex-tails [18]. Statistical properties of these tail sequences were presented in previous publications [18,28], including the distribution of tail lengths and adenine content. However, before aggressively evaluating the full potential for HMMs in defining and comparing these datasets, we wished to perform two additional statistical analyses, as this information could be useful in interpreting any unusual or unexpected models that we might later encounter.
It was important to determine if an early stage of tail addition influences subsequent nucleotide addition. One pattern that would suggest such a process would be if the predominant nucleotide identity (A or U) of positions early in the tail differed for tails of specific lengths. An indicator of such an influence would be differences in composition of the early tail sequence between tails of different lengths. Therefore, we analyzed the positional composition of tail populations of fixed lengths from 1 to 60 nucleotides. For CO3 tails, a strong correlation was observed between beginning with a U-rich region and achieving lengths of approximately 50 nucleotides or more. A heat map of A versus U composition for each position specific to populations of discrete tail lengths is shown in Fig 1 for all analyzed tail sets. This result suggests that the composition of nucleotides added early in tail addition can affect the number of subsequent additions.
We also analyzed a feature that is a hallmark of in-tails: a preponderance of homopolymeric additions (either A or U), in comparison to the more frequent nucleotide switching observed in ex-tails (see example in Introduction). To specifically examine in-tails we utilized the tail population datasets that lack ex-tails for this investigation (those derived from CO1B, CO3B, and CO3P) (Fig 2). We first examined differences in A and U polymer lengths between transcripts. We found that the distributions of U-homopolymers were simpler than those of the A-homopolymers, with almost entirely monotonic decreases. Almost all U-homopolymers were less than 15 bases long for all examined tail populations. The A-homopolymer distribution was more complicated in that it contains more robust, well-defined peaks of longer homopolymers. The CO3B and CO3P tail population distributions are more similar to each other than those of CO3B and CO1B. This provides additional indirect evidence that in-tail additions are regulated differentially between genes, as suggested in earlier works [18,28,29].
The previous result led us to focus predominantly on the distribution patterns of A-homopolymer rather than U-homopolymer additions within each population of tails. Initially we selected the CO1 populations for which to perform this analysis, because CO1 shows both ex-tail addition (in the procyclic form) and virtually no ex-tails in the bloodstream form. In enumerating homopolymers, we developed an index that refers to homopolymers of both A and U by their ordinal position within the tail. Originally, tail data was acquired by individually cloning and sequencing tails in limited numbers, so that only the most predominant tails were captured. From that time, in-tails have typically been described in the literature as A-homopolymers, sometimes possessing an initial addition of a short U-homopolymer, as these tails predominate. Therefore, one point of concern was that quantitation of in-tail homopolymers at the level of A/U3 and higher would be very rare relative to the A-homopolymer (A1) tails of the population, and thus their length distributions would be supported by very few reads. However, as the total number of tails analyzed was so high, this was not the case. For example, over 58,000 tails were used to compute the A5 value for the CO1P tail population.
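A minimal sketch of this kind of homopolymer indexing is shown below: it decomposes a tail string into successive homopolymer runs and labels each run by its base and ordinal position. The exact labelling convention used in the study may differ in detail; this version is only meant to make the idea concrete.

from itertools import groupby

def homopolymer_runs(tail):
    """Split a tail such as 'UUUAAAAAUUAA' into ordered homopolymer runs.

    Returns a list of (label, length) pairs, e.g. [('U1', 3), ('A2', 5), ...],
    where the number is the run's ordinal position within the tail.  This is a
    sketch of the indexing idea; the study's own convention may differ in detail.
    """
    runs = [(base, sum(1 for _ in group)) for base, group in groupby(tail)]
    return [(f"{base}{i}", length) for i, (base, length) in enumerate(runs, start=1)]

print(homopolymer_runs("UUUAAAAAAAAAAUUAAUAU"))
# [('U1', 3), ('A2', 10), ('U3', 2), ('A4', 2), ('U5', 1), ('A6', 1), ('U7', 1)]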
We found that the length of the first A-homopolymer can vary from later homopolymers. Specifically, the initial A-homopolymer (A1 or A2) is considerably longer in the CO1 gene in both life stages, as shown by the peaks around 20-25 bases in Fig 3. This result might reflect that initial A-homopolymer addition is in a different biological context than later in-tail additions. Interestingly, the distributions of the initial (A1) homopolymer in other genes (CO3P and ND1) in either life stage do not exhibit a trend similar to the CO1 A1 homopolymer. These transcript tail A1 datasets do not show spikes of longer homopolymers in Fig 4 (solid lines).
We hypothesized that the cluster of CO1 A1 homopolymers in the range of 15-30 nt in length may be largely comprised of tails containing no U (exclusively poly(A)) that are traditionally considered the initially added tail in trypanosome mitochondria. To determine this, we analyzed only sequences with no Us in them (i.e. there is no U1 state; a poly(A) tail) and compared the distribution of A-homopolymer lengths to those of all A1 homopolymers. We did this for all transcript sample sets to determine whether this is true only of CO1 tail populations or is a larger trend most visible among the CO1 tails. The length distributions are strikingly different, as the poly(A)-only tails appear much more precisely length-controlled than is A addition in the context of co-occurring oligouridylation (Fig 4). While this is most obvious for CO1 populations, it is also evident to a lesser extent for the ND1P tail population. Because of the difference in homopolymer lengths between the A1 and later A-additions, and the higher length control exercised on a poly(A) tail, in our final models (described in Section 5, below) we added a separate progressive state ('A-only state') that only adds the initial A1, which we will describe at that time. Also uncovered by this methodology are a longer To provide a sense of the differences between the tail sequences that we seek to capture in our models, we show below three randomly selected sequences from each tail population, drawn from the population of sequences that only appear on mRNAs of that specific gene and life-stage.
CO3B
Performance of biological system-informed HMMs of increasing complexity
In previous work [18] we categorized in-tails and ex-tails by using a HMM of one discrete complexity level. Our previous HMM [18] (in this text referred to as model B5, shown in Fig 5) contains five nucleotide-adding hidden states; a single one of those hidden states corresponds to the ex-tail addition. For a given sequence the Viterbi method was used to determine the most likely path through the model, and if the state path contained the ex-tail state the corresponding nucleotides were considered to be part of the ex-tail. Our 5-state HMM worked well for that categorization purpose, but in order to explore the generative ability of HMMs as applied to these datasets and potentially identify unexpected states, we considered HMMs of a range of complexities. The simplest possible model with which to analyze our datasets is shown in Fig 6. A single state emits either an A or a U, with the emission probabilities determined by the training data (the silent Begin/End states are ignored in our state counts). We will refer to this as Model B1 (Beginning model 1). Model B1 has 2 independent parameters: one determines the transition probabilities (p and 1-p in our figure) out of the emittive state, and the other determines the proportion of A versus U in the emissions. The hidden states that emit both A and U can be interpreted biologically as a distributive or a progressive process. As shown in Fig 7, a silent S state can represent a disassociated enzymatic complex rather than a continuous process of addition shown on the left. Computationally it is simpler, and equivalent, to model the processive addition of mixed emissive states. Regardless of this flexibility, Model B1's output would be insufficient to capture the strong correlation between consecutively added nucleotides in the in-tails, i.e. the tendency to have longer homopolymers. Since the limited output of Model B1 is only theoretically sufficient to capture final ratios of A and U in the tails, we did not utilize it and examined the next-simplest possibility instead.
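The Viterbi-based labelling described above can be sketched as follows for a toy three-state model in the spirit of Model B3 (an A-adding state, a U-adding state, and a mixed ex-tail state). The parameter values are invented placeholders rather than the trained values from this study, and the final classification simply asks whether any position was assigned to the ex-tail state.

import numpy as np

# Toy parameters in the spirit of a three-state in-tail/ex-tail model (Model B3).
# States: 0 = A-adding, 1 = U-adding, 2 = mixed "ex-tail" state.
# Values are illustrative placeholders, not trained parameters from this study.
start = np.log(np.array([0.70, 0.25, 0.05]))
trans = np.log(np.array([[0.85, 0.10, 0.05],
                         [0.15, 0.80, 0.05],
                         [0.05, 0.05, 0.90]]))
emit = np.log(np.array([[0.97, 0.03],     # columns: A, U
                        [0.05, 0.95],
                        [0.70, 0.30]]))   # roughly 7:3 A:U, as expected for ex-tails
sym = {"A": 0, "U": 1}

def viterbi_states(tail):
    """Most probable hidden-state path for a tail string, via the Viterbi algorithm."""
    obs = [sym[c] for c in tail]
    score = start + emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        cand = score[:, None] + trans          # cand[i, j]: best path ending with i -> j
        back.append(cand.argmax(axis=0))
        score = cand.max(axis=0) + emit[:, o]
    path = [int(score.argmax())]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return path[::-1]

tail = "AAAAAAAAAAUUUUUAAUAAUUAUAAU"
states = viterbi_states(tail)
is_ex_tailed = 2 in states                     # did the path pass through the ex-tail state?
print(states, "ex-tailed:", is_ex_tailed)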
A two-state model, B2, allows separate states for adding an A or U. Model B2 as shown has 5 independent parameters (Fig 8). This model would do a much better job than B1 at capturing the overall correlation between consecutively added nucleotides within the tail dataset.
However, if subsets of tails within the complete dataset possess different or changing correlations, Model B2 would be inadequate to capture this. Specific to our datasets, Model B2 cannot distinguish which states correspond to the in-tail and which states correspond to the ex-tail addition.
In contrast, Model B3 includes an additional state that would allow for modeling differences between in-tail and ex-tail characteristics if more than just the in-tail state exists for the tail population. It is the simplest possible model that can reflect dual states and has 9 independent parameters. Any tail passing through the state depicted as "A/U" in Fig 9 is considered an ex-tail. We generated a combined dataset of tail populations of the three transcripts whose tail populations should reflect ex-tailing on which to train this model. Fig 9 includes the emission probabilities after training on a combined ex-tail-containing dataset (samples CO1P, ND1B, and ND1P). In Model B3 and Model B5, described next, the ex-tail state adding both As and Us should train to have close to a 7:3 ratio, as observed experimentally [19] when such a state is biologically present. Indeed, in all samples which contain ex-tails, we found that the A/U state converges after training to a value approximating a 7:3 ratio (in some cases, the number was closer to a 2:1 ratio). In contrast, the A/U state from samples which do not contain ex-tails does not train to a similar A:U ratio. Instead, this potential A/U state usually converges to an A-adding state. Tails which are modeled as passing through the ex-tail A/U state are approximately twice as long as those modeled as consisting entirely of in-tail sequence under both our B3 and B5 models. The assignment of a nucleotide to the ex-tail state is very robust for these models; in comparing the Viterbi algorithm assignments of states for models B3 and B5, over 99% of the tails passing through the B5 A/U state passed through the B3 A/U state.
One thing that could not be captured by Model B3 is whether there is information imparted in the in-tail that influences whether or not it will become ex-tailed. The correlation of an initial U sequence with longer tails in CO3 tail populations seen in Fig 1 is not relevant here because those populations contain only in-tails. To capture the possibility of ex-tail additions being specific to some characteristic of in-tail addition, a model would need a separate path for exclusively in-tail additions. Model B5, which we previously used to quantify tail features [18], possesses this feature. We performed an analysis for each tail population to see if the sequences going though the in-tail only top set of states were any different than those passing through the in-tail to ex-tail bottom set of states. We found no statistical differences between these, in terms of overall length or overall composition, for any of the six transcript datasets. In light of this, Model B3 is nearly as sufficient as Model B5 for capturing the characteristics of these tails, at least in the populations analyzed.
The B3 and B5 models can classify in-tails and ex-tails, but we wanted to know their capacity as generative models. For each of our six tail datasets, we generated a tail length frequency profile using Models B1 through B5 trained on each respective dataset. The mean tail lengths generated from every model were very close to the observed mean length (in every case the difference in mean length was less than 5% of the standard deviation). However, these models failed in their ability to recapitulate the tail length distribution, as shown in Fig 10. In every case the standard deviation of tail lengths generated by the models is much larger than that of the observations. As neither of the relatively simple HMMs B1 or B2 performed better than Model B3 or Model B5 at matching the details of the tail length distribution, we concluded that the termination of tail elongation is not a process that is well modeled by the memoryless Markov state structure. The empirical length distribution can be forced by ad hoc mechanisms but this fails to illuminate any additional features of biological relevance.
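The difficulty with the length distribution can be made concrete in the simplest, Model B1-like case: if a single emitting state continues with probability p and terminates with probability 1 - p, the tail length is geometrically distributed, and its standard deviation is locked close to its mean, so a tightly controlled empirical length distribution cannot be matched. The short simulation below uses an arbitrary illustrative value of p, not a fitted parameter.

import numpy as np

rng = np.random.default_rng(0)
p_continue = 0.96          # arbitrary illustrative value; mean length = 1 / (1 - p) = 25

# Sample tail lengths from a memoryless single-state model: geometric distribution.
lengths = rng.geometric(1.0 - p_continue, size=100_000)

mean, std = lengths.mean(), lengths.std()
print(f"simulated mean = {mean:.1f}, std = {std:.1f}")
# Theory: mean = 1/(1-p) = 25, std = sqrt(p)/(1-p) which is about 24.5, nearly as large as
# the mean, whereas a tightly regulated observed length distribution has a much smaller std.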
We also examined the A and U homopolymer length distributions output by our models. For example, Fig 11 (left and center panels) compares the distribution of lengths of the initial (A1) A-homopolymers in our data and from model B3. The B3 model reproduced the differences in the CO1 lengths quite well, i.e. the local maxima at tail lengths of approximately 16 nucleotides, which are unique to that gene among the three studied. It also captured to some extent the presence of longer tails in the CO3 gene samples.
HMM sequence error correction
The success of the Model B3 in classifying the in-tail versus ex-tail states suggested that this model could also be used to correct for some sequencing and library-creation errors in our tail sequences. Prior to being used in training models, we had previously restricted the tail population datasets to tails only containing nucleotides A and U. Although this removed approximately 14 percent of the tails across all datasets, the datasets were so large (averaging over 700,000 tail sequences per sample type) that the tails they contained after clean-up still far exceeded the minimum number to adequately train our models and use the models to detect differences between tail populations. However, we wanted to see how our outputs would change if those eliminated tails were corrected and restored to the tail population.
CircTAIL-seq combines circular reverse transcription-polymerase chain reaction (cRT-PCR) and deep sequencing techniques, and the errors associated with each of these techniques need to be considered in the overall error rate. In our tail sequences, we have assumed that every C/G is an error. The reason for this is two-fold: (1) There have been no enzymes identified that can efficiently add C/G. While KPAP1 can inefficiently add C/Gs at high concentrations (100uM), KPAP1's affinity for As exceeds this even at much lower concentrations (1uM) [19]. (2) While the As and Us have a non-random pattern in transcripts and life-stages, and between in-and ex-tails, the C/Gs have no discernible pattern. We acknowledge that we cannot determine if any of the A/Us are cRT-PCR or sequencing errors with this method. We also recognize that every C/G may not be a cRT-PCR or sequencing error, but we cannot differentiate between technique-derived errors and cell or enzyme-derived errors. KRET1 (the U-adding enzyme) has been shown to have a slight affinity for Cs rather than Gs [30], which led us to compare error rates of overall transcripts to transcripts with high numbers of Us. While there are slightly more C errors than G errors across all transcripts (C error rate = 0.00352, G error rate = 0.00262), when we examine only transcripts that have a high U content, the C and G error rates (C errors = 0.00358-0.00332, G errors = 0.00272-0.00247) do not change. This suggests that we cannot link the addition of C to KRET1 activity. If some instances of C/Gs are in fact the actual state of the sequences inside the cell, then the error rate for our process will be lower than what we here have calculated.
To decrease errors during library creation, the PCR step is optimized in circTAIL-seq by determining the fewest number of cycles possible while still generating a product [28]. Additionally, unlike other 3' non-encoded tails, trypanosome tails are made up of heterogeneous A/U sequences, which reduces, though does not negate, the concern for error associated with long homopolymeric regions in Illumina sequencing [4,31]. The C/G error rate was 0.004 per base for tails with two or fewer C/Gs. We did not include the tails that had more than two C/Gs because they could represent tails that were not processed correctly and still include pieces of the transcripts from which they were derived. This C/G error rate includes 2/3 of the possible errors, as each base has three incorrect options. We then multiplied the C/G error rate by 1.5 to determine an overall error rate of 0.006. Using the estimated error rates for the KAPA2G robust polymerase (5.88 × 10⁻⁶, KAPA Biosystems) and for MMLV polymerase (3.3 × 10⁻⁵ [32]), and the equation supplied in [33], we found the RT-PCR step had an estimated error rate of 0.000143. Next, we investigated the error rate associated with Illumina sequencing by analyzing the error rate for the PhiX DNA that was spiked in before sequencing. This error rate was 0.0048, so we determined that the majority of the errors were Illumina sequencing related. This source of error is unavoidable, so we considered methods to overcome these errors.
Across all datasets, 12% of our sequences contained a single C or G, corresponding to an error rate of 0.0035 per letter. We extended the emissions of Models B2 and B3 to include Cs and Gs, and then corrected the Cs and Gs based on the Viterbi algorithm's state assignment for each tail sequence. For Model B3, C/G positions assigned to the ex-tail state, with its approximately 70% A and 30% U emissions, were randomly re-labeled with the same probabilities as the emissions. Because errors were more common in longer sequences, the main result of our corrections was an upward shift in the observed tail length distribution, as shown for the three genes with transcript tail populations containing ex-tails in Fig 12. Finally, we considered attempting to correct for errors in the A and U as well as the G and C bases. However, any such correction would be much more dependent on knowledge of the true biological tail addition process, and so we deferred this until more experimental data is available.
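The correction step itself can be sketched as follows: given a tail and a hidden-state label for each position (for example from a Viterbi decoding extended with small C/G emission probabilities), each C or G is replaced according to its state, deterministically for the A- and U-adding states and at random with roughly 7:3 A:U odds for the ex-tail state. The state indices, probabilities, and example inputs below are placeholders, not the trained values from this study.

import numpy as np

rng = np.random.default_rng(1)

# Per-state correction rules (placeholders): states 0/1 are A- and U-adding in-tail
# states, state 2 is the mixed ex-tail state with roughly 7:3 A:U emissions.
DETERMINISTIC = {0: "A", 1: "U"}
EX_TAIL_PROBS = (0.7, 0.3)      # probability of relabelling to A vs U in the ex-tail state

def correct_cg(tail, state_path):
    """Replace C/G calls using the hidden-state label assigned to each position."""
    corrected = []
    for base, state in zip(tail, state_path):
        if base in "AU":
            corrected.append(base)                     # leave A/U calls untouched
        elif state in DETERMINISTIC:
            corrected.append(DETERMINISTIC[state])
        else:                                          # ex-tail state: random relabel
            corrected.append(rng.choice(["A", "U"], p=EX_TAIL_PROBS))
    return "".join(corrected)

# Example with a hand-written state path (in practice it would come from Viterbi).
print(correct_cg("AAAGAAAUUUCUAAUG", [0]*8 + [1]*4 + [2]*4))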
Unstructured and ultimate HMMs
Although there is a logical, biological basis for the structure of the models we have been using thus far, by structuring them as we have, we have potentially introduced some bias and missed discovering state possibilities. We therefore examined unstructured (designated with a G) models which had no restrictions on their emissions and transitions, apart from having a single well-defined start/stop state. These models (G1-G5), with an expanded complexity range relative to Models B1-B5, are shown in Fig 13. Initial transition probabilities were chosen for each gene and life stage to match the observed A1 distribution. When training other model parameters the A-only state parameters are held fixed. Models G1 and G2 are equivalent to Models B1 and B2, although random initial values were chosen prior to training. After training on the sequencing error-corrected data (as described above in Section 4), these models provided further evidence for the lack of ex-tail states (with an approximate 7:3 A:U ratio) in the CO1B, CO3B, and CO3P samples. In Models G3 and G4, state 2 has converged to an ex-tail adding state for the CO1P, ND1B, and ND1P datasets. Interestingly, the unstructured models also suggest a surprising sub-population of tails which immediately enter an ex-tail state without the prior addition of longer U- and A-homopolymers, which we discuss below. An unanticipated discovery such as this reveals the value of unstructured models in this work. Among the unstructured models, G5, with 5 emissive states, was the highest complexity that we examined. It has 53 free parameters. Its parameter space is of high enough dimension that the Baum-Welch training algorithm does not always converge to the global optimum. This increase in local optima makes it difficult to draw consistent inferences about the connections to actual biological states in this model. Therefore, we conclude that models with more parameters than Model G5 can have little relevance, at least in this two-observable emission system.
Finally, working off the unstructured models and the emissions that are presented in Fig 13, we added constraints that delineate the A-homopolymer only state suspected earlier to be a discrete entity (Fig 4). As it was clear that even an unstructured model identifies an ex-tail state when it exists, we customized one model for tail populations from transcripts that should be comprised of only in-tails, and one for populations consisting of both in-tails and ex-tails. The basic connectivity and model-to-biological state correspondences of our final consensus models before training (initial values) are shown in Fig 14, a 4-state model for tail populations of in-tails only and a 5-state model for populations containing ex-tails. In both, state 1 was constrained to have a self-transition matching that of a model trained on A-only tails, and state 1 is only accessible by transitions from itself or the initial (B) state. It is unclear a priori how exactly to separate the A-only state 1 and state 2 in training the models, and so it is impossible to distinguish their separate functional roles. This ambiguity corresponds to the biological question of how separate in location, composition, and regulation the process of A-only addition is from the A-and-U in-tail addition. Thus states 1 and 2 remain hybrids of the two possible biological pathways, one of which may only add As, and the other which transitions into the A-and U-adding process.
After training, the random models with these constraints converge to those shown in Fig 15. The models readily show qualitative differences in emissions of tail populations between the mRNAs and between life-stages. An exception is the tail dataset pair CO3B/CO3P, whose models have trained into very similar forms. As tails are known to play regulatory roles, this indirectly suggests that the pre-edited CO3 mRNA is less differentially regulated between life-stages.
The small numbers of U-additions present in states 1 and 2 in Fig 15 are presumably the result of sequencing and replication errors which introduced spurious Us. While it would be possible to use our trained models to remove these, such circular modification of the data has the potential to distort it and to overfit the models. Therefore, we did not attempt to correct these minor aberrancies.
In general these final models provide an improved fit to the data as determined by the degree to which log-likelihood per emission matches actual values obtained for tail populations. For example, compared to our model B5 the final models improve the log-likelihood per emission by a mean of 1% (this is the total log-likelihood of all sequences from the forward-backward algorithm divided by the total number of nucleotides in the sequences). The most extreme difference in model emissions between states can be clearly observed in Fig 15 between CO1 bloodstream and procyclic form tail populations. Since only the CO1P tail population has tails in an ex-tail state, the CO1 models have differing total numbers of states (4 versus 5). Less visually obvious is that both CO1 samples have the largest proportion of A-only state 1. This is detectable in Fig 15 as the narrower line representing a lower transition probability from the beginning state B to the U-adding state 3 in CO1 models compared to ND1 and CO3. With a lower probability of tails initiating with U, it follows that a higher proportion of tails will feed into the predominantly A-only state 1 (slightly thicker line for CO1).
Fig 15. Post-training final models. Post-training model topologies and emissions for the best unstructured models for each tail dataset. The three models in the top row are in-tail only models, while the three bottom row models include ex-tails. The areas covered by the separate colors in each state circle are proportional to their emissions: an uncolored circle labeled 'B' indicates the beginning/end state, red is a single adenine addition, and blue is a single uracil addition. Line thickness indicates transition probability, with thicker arrows indicating higher probability and thinner arrows indicating lower probability.
Additional ways to view the accuracy of final model versus prior model emissions are to compare plots generated with the model outputs with plots of the actual training data. In this and previous work we have plotted directly from the data such features as tail length profiles, homopolymer composition across the tail lengths, overall U and A homopolymer profiles, and the homopolymer profile of the first A homopolymer in tails initiating with "A" (A1). The latter metric is a simple one for which we have already compared the training data set with model B3 emissions in Fig 11 (left two panels). We therefore decided to select this metric to compare the relative abilities of the B2 and final model emissions to predict A1 homopolymer length profiles for the tail populations. The A1 homopolymer length is a parameter capturing both compositional and length data, so it seemed a relevant tool to compare models. A significant improvement is seen for shorter homopolymers. However, the ends of the distributions are not dramatically better, and in general the HMM framework is not optimal for modeling the termination of nucleotide addition. This modeling difficulty suggests that a separate biological mechanism may exist for termination. The same problem exists for the B3 models. When taken in sum, the shape of the A1 length profile curves of the final model (Fig 11 right panel) more closely aligns in shape and amplitude with those of the training data than those of the B3 model. Some discrepancies, however, still exist.
Finally, ND1 tail population emissions best exhibit the unexpected feature of immediate ex-tail states (state 5), particularly in ND1P tails where the transition probability from the initial 'B' state to state 5 is 0.045. We consider this to be an additional advantage of the final models. Tails that are evidently generated entirely in an ex-tail state may reflect a real difference in regulation. In other words, some ND1 transcripts in the procyclic life stage may bypass the expected in-tail addition stages.
Conclusion and future application
The 3' mRNA tail addition system in T. brucei mitochondria provides an excellent opportunity to study the application of probabilistic modeling to elucidating genetic and biochemical details of a complex system. Sequence data from this system consists of binary strings which are relatively simple to characterize. Structured HMMs with small numbers (2-4) of hidden states performed well at state classification tasks on these datasets, but failed as generative models. The unstructured state models provided new, testable hypotheses on more subtle variations in tail addition that should correspond to distinct and yet unidentified biological states. For example, we may hypothesize that these variations reflect subtle functional changes to enzymatic or regulatory proteins that result from post-translational modifications, or changes in composition to protein or RNA-protein complexes. Additionally, our added constraints to the unstructured models to reflect the A-only addition pathway more clearly revealed the differences between the datasets in both state transition probabilities and relative nucleotide compositions that define each state post-training. The clarity of modeling output shown here for trypanosome mitochondrial mRNA tail addition demonstrates why it could serve as a real-world introduction to the application of HMMs to biological systems in a most simplified form.
|
2021-01-07T09:04:49.651Z
|
2021-01-06T00:00:00.000
|
{
"year": 2021,
"sha1": "1d81c4904094439459fe3b55236e4ea57ef232ad",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0244858&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6a0347e0df1fcec5ec4c547845b29a697e637c63",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
264065119
|
pes2o/s2orc
|
v3-fos-license
|
Recent Advances in Flexible Foam Pressure Sensors: Manufacturing, Characterization, and Applications – a Review
Abstract The dramatic changes the world and the expectations of potential users have pushed researchers working in flexible foam pressure sensor (FFPS) fabrication to develop more affordable and high-performance materials. Among various materials, polymer foamed-based nanocomposites are a preferred choice due to their excellent mechanical properties, good chemical properties, and easy control. Moreover, the use of nanofillers such as carbon nanotubes (CNTs), carbon black (CB), and graphene in the polymer matrix has greatly improved the properties of the sensors. Therefore, this review focuses on the recent advances in FFPS by using different types of nanofillers in shape and structure. Accordingly, developments in the fabrication of FFPS, including dip coating, spray coating, sputtering, and in situ polymerization are also discussed. Special attention has been paid to identifying the underlying mechanism to maximize pressure sensing and improve the performance of FFPS. Suggestions for future developments in the area of sensing devices applied in health monitoring are also presented.
Introduction
[3][4][5] Thanks to these outstanding properties, FFPS can be used as an electronic skin (e-skin) for biomimetic robots and prostheses, as well as to monitor bio-signals in humans. [6][19] The capacitive FFPS measures the change in capacitance under pressure by inserting a flexible dielectric material between two flexible conductive plates. [20,21] The piezoelectric and piezoresistive FFPS are based on a change in voltage or resistance, respectively, when pressure is applied to them. [22] Various processes have been developed and applied to prepare FFPS, such as spray coating, [23,24] dip coating, [19] sputtering, [25] and in situ polymerization. [26] These processes combine a conductive nanofiller with a foamed structure to create the sensor. The main differences between these methods are the dispersion quality of the nanofiller and the scalability, simplicity, and cost of the method. The nanofiller acts as a conductive layer that registers the change in electrical signal when pressure is applied, while the foam structure serves as a supporting substrate for the nanofiller and compresses elastically under pressure. Flexible pressure sensors are classified according to their substrate structure: one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D). [27] A 1D pressure sensor typically consists of a substrate made of a yarn of elastic rubber or elastic fibers to which conductive fillers are attached by coating or immersion. For example, Gong et al. [28] prepared a pressure sensor by coating spandex yarn with MXene and polydopamine (PDA)/Ni2+. The results showed that the sensor has good pressure sensitivity and is suitable for health monitoring applications. 2D sensors are usually made from thin-film substrates coated with a conductive filler. Tao et al. prepared a paper-graphene pressure sensor that can detect pressures of up to 20 kPa with a good pressure sensitivity in the 0-2 kPa−1 range. However, 1D and 2D sensors usually have a limited low-pressure detection capability and cannot sense very low-pressure signals. [27] On the other hand, 3D foam substrates such as polydimethylsiloxane (PDMS), melamine foam (MF), polyimide foam (PF), and polyurethane (PU) have shown better sensing performance. This is due to the unique 3D microstructure and superior elasticity of the substrate. [29] Moreover, a highly conductive foam with a porous structure is more easily deformable and therefore more sensitive to pressure.
[59][60][61] The main objective of this review is to provide a comprehensive overview of flexible foam-based pressure sensors. The sensing mechanisms of the pressure sensors will be presented, and a brief introduction to the foam structures used as supporting substrates will also be included. Nanofillers commonly used as conductive layers will be discussed, and the preparation of FFPS systems will be summarized. The evaluation of the pressure sensor and the factors affecting its performance will be described, and strategies to improve sensor performance are also presented. Furthermore, applications of FFPS, such as monitoring bio-signals in electronic skin and in humans, will also be discussed.
Pressure sensing mechanisms
As mentioned earlier, the sensing mechanisms of FFPS are divided into piezoresistive, [4,62] capacitive, [12,63,64] and piezoelectric [15] types. The response of the sensing mechanism differs depending on the type of materials that make up the sensor's structure. The operating principle of the piezoresistive sensor is based on the change in electrical resistance in response to the application of pressure (Fig. 1a-c). When an external force presses on a piezoresistive sensor and causes it to change shape, the distance between the conductive fillers decreases. This creates more conductive paths, resulting in a reduction of the electrical resistance in the sensor (Fig. 1a). The pressure sensitivity of the piezoresistive pressure sensor (S_pr) is calculated as the slope of the relative resistance change versus the applied pressure, S_pr = (ΔR/R0)/ΔP, where the relative resistance change ΔR/R0 = (R − R0)/R0, R0 is the initial resistance, R is the resistance while the sensor is under pressure, and ΔP is the varying pressure in Pa.
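As a minimal illustration of this definition, the sketch below estimates S_pr as the slope of ΔR/R0 versus applied pressure from a synthetic resistance-pressure series; the numbers are invented for demonstration and do not come from any sensor discussed in this review.

```python
import numpy as np

# Illustrative data (not from the review): applied pressure in Pa and the
# corresponding resistance of a hypothetical piezoresistive foam sensor.
pressure = np.array([0.0, 200.0, 400.0, 600.0, 800.0, 1000.0])      # Pa
resistance = np.array([1000.0, 960.0, 925.0, 890.0, 858.0, 826.0])  # Ohm

r0 = resistance[0]
rel_change = (resistance - r0) / r0          # ΔR/R0 (dimensionless)

# S_pr is the slope of ΔR/R0 versus pressure (here from a linear fit).
s_pr = np.polyfit(pressure, rel_change, 1)[0]
print(f"piezoresistive sensitivity ≈ {s_pr:.2e} 1/Pa "
      f"({s_pr * 1e3:.3f} 1/kPa)")
```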
On the other hand, the operating principle of a capacitive pressure sensor is to detect the change in capacitance when pressure is applied to it. [65] A capacitive pressure sensor is made up of a dielectric layer sandwiched between two parallel electrodes. The capacitance is calculated as C = ε0εrA/d, where ε0 is the permittivity of vacuum, εr is the relative permittivity, A is the area between the electrodes, and d is the distance between the electrodes. [68] The pressure sensitivity of such capacitive pressure sensors (S_c) can be calculated as S_c = (ΔC/C0)/ΔP, where C0 is the initial capacitance and C is the capacitance when pressure is applied (ΔC = C − C0). Piezoelectricity means that materials generate an electrical voltage when pressure is applied to them. The piezoelectric type of FFPS takes advantage of this effect by measuring the change in voltage as a function of the applied pressure (Fig. 1c). When pressure is applied, the piezoelectric sensor deforms and polarization occurs. At the same time, negative and positive charges accumulate on its opposing surfaces and convert the mechanical pressure into an electrical signal. [69] The pressure sensitivity of such systems can be determined as S = ΔI/ΔP, where ΔI is the change in current.
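The following sketch applies the parallel-plate relation and the capacitive sensitivity definition above to hypothetical electrode dimensions and gap compression; all values are assumptions chosen only to show the arithmetic.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(eps_r, area_m2, gap_m):
    """C = eps0 * eps_r * A / d for a parallel-plate capacitive sensor."""
    return EPS0 * eps_r * area_m2 / gap_m

# Hypothetical foam dielectric: 1 cm x 1 cm electrodes, 1 mm gap that
# compresses to 0.8 mm under 1 kPa (numbers chosen only for illustration).
c0 = parallel_plate_capacitance(eps_r=3.0, area_m2=1e-4, gap_m=1.0e-3)
c = parallel_plate_capacitance(eps_r=3.0, area_m2=1e-4, gap_m=0.8e-3)

delta_p = 1000.0                              # Pa
s_c = ((c - c0) / c0) / delta_p               # capacitive sensitivity, 1/Pa
print(f"C0 = {c0*1e12:.2f} pF, C = {c*1e12:.2f} pF, S_c ≈ {s_c:.2e} 1/Pa")
```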
Supporting substrate
[78] PDMS is prepared from dimethyldichlorosilane, which reacts with water. It can also be prepared by a reaction between hexamethylcyclotrisiloxane and octamethylcyclotetrasiloxane in the presence of acidic or basic catalysts. [79,82] However, two drawbacks limit its use in FFPS. First, the production of PDMS foam requires additional processes to introduce pores into the structure of the material. This can be achieved by various methods, such as direct templating, [83,84] emulsion templating, [85,86] gas forming, [87,88] and 3D printing. [89,90] Second, in the production of polydimethylsiloxane it is difficult to control the structure because it is difficult to manipulate the ratio of reactants. MF is a flexible polymer with low density and high porosity that can be prepared from a formaldehyde-melamine-sodium bisulfite copolymer. [91] MF has good mechanical properties: when pressure is applied to it, it compresses and returns to its original shape when the pressure is released. [92] However, MF has poor thermal properties and can decompose at low temperatures. [93] PF has good thermal properties such as thermostability and flame retardancy. It also has excellent mechanical properties such as flexibility and tensile strength. [94,95] The powder forming technique is the most widely used and typical process for the preparation of polyimide foam. [96] This includes the following two steps: (1) the polymerization of benzophenone tetracarboxylic dianhydride and oxydianiline in a mixed solution of tetrahydrofuran and methanol; and (2) the imidization process and thermal forming. The chemical and physical properties of PF depend on many factors, such as foam structure, chemical composition, and density. [97,98] Nonetheless, compared to conventional substrates, the fabrication of PF-based sensors may require additional processing steps, which can increase the complexity and cost of manufacturing. [95] PU foam, on the other hand, is highly elastic. [101] Therefore, PU can be compressed up to 85% without any plastic deformation. [102] PU foam is manufactured easily by the reaction of polyol, isocyanate, and blowing agents in the presence of a catalyst. [103] Foaming of PU can be initiated by the reaction of isocyanate and water (blowing agent), which leads to CO2 formation and turns the polymer into foam. One of the most interesting features of PU is that its properties can be controlled by adjusting the ratio of reactants. The isocyanate forms the so-called hard segment, while the polyol forms the soft segment of polyurethane. Thus, by changing their ratio, the structure and properties of PU can be controlled. [104] The degree of phase separation of the hard and soft segments and the content of the hard segment in turn affect the mechanical and physical properties. [105,106] However, like other types of foam, PU foam is typically porous and allows liquid or gas penetration. This permeability can affect the performance of nanofillers, compromising sensor reliability.
[107] The physical and chemical properties of the above-discussed supporting substrates are compared in Table 1. It can be seen that PU is the best candidate for pressure sensors compared to the other polymer foam structures because the polyurethane foam structure can be easily tuned by controlling the ratio of reactants during the synthesis. In addition, the ability to produce PU with a wide range of Young's modulus and high compressive strength allows it to be used over a wide range of pressure levels and applications. Furthermore, PU can be made both hydrophilic and hydrophobic by selecting appropriate polyols and isocyanates. [141] PU produced with a higher polyol content tends to be more hydrophilic. Isocyanates are more reactive; they are responsible for the polymerization process and the stiffness of the PU. Hence, an excessive amount of isocyanate leads to a stiffer and more hydrophobic PU. [141] Therefore, the properties of PU can be easily fine-tuned according to the properties (e.g., hydrophilicity) of the applied nanofiller.
[149] The 0D group includes materials that have all dimensions at the nanoscale, such as silver nanoparticles (AgNP) and CB. [150,151,154] The 1D group represents nanomaterials that are at the nanoscale in two dimensions, such as silver nanowires (AgNW) and CNTs. They have remarkably high aspect ratios [155] and are most commonly used in FFPS because of their excellent electrical properties and unique fibrous structure. [6,156] The 2D group includes structures that have two dimensions outside the nanometric size range and an atomic-scale thickness. [159][160] MXene-based FFPS showed high sensitivity, wide pressure-range detection, and a good strain factor. Among the aforementioned nanofillers, graphene is an attractive candidate for FFPS fabrication due to its excellent electrical conductivity, large surface area, and exceptional mechanical properties (Table 2). It has an electrical conductivity 99.94% higher than that of silver and a Young's modulus 75% higher than that of CNTs. In addition, graphene has a 94% larger surface area than CB. These properties enable graphene to reach the electrical conductivity percolation threshold of FFPS at lower loading. [178] However, the large surface area [179][180][181] and the strong van der Waals forces [182,183] lead to the agglomeration of graphene in the polymer matrix. Therefore, the improvement of FFPS performance by the addition of GNP is only possible if it is uniformly dispersed in the polymer matrix. One of the most effective strategies to overcome agglomeration and improve the dispersion of graphene is to combine nanofillers of different morphologies, shapes, and structures. [184,185] CNTs are the most suitable candidates to use with graphene as a hybrid nanofiller because the 1D CNTs can bridge the 2D GNPs and thus prevent the aggregation of GNPs and improve the dispersion state of the nanofiller in the polymer matrix. [186] The synergistic enhancement of hybrid graphene-CNTs can be described as a 3D hybrid structure. Such a structure improves the dispersion and prevents graphene platelets from agglomerating. Simultaneously, a large surface area is created between graphene sheets and nanotubes, which improves the electrical conductivity. In addition, the CNTs can act as extended tentacles of the 3D hybrid structure and interlock with the polymer matrix chains, leading to better interaction between nanotubes and graphene. [179,187]
Table 3. Advantages and limitations of common FFPS fabrication methods.
Spray coating — Advantages: fast process, low risk, ultra-thick coatings, no phase changes, no oxidation, minimum thermal input to the substrate. Limitations: inappropriate for complex shapes; the sprayed material undergoes plastic deformation, which reduces the ductility of the coating layers. [188-190]
Sputtering — Advantages: the process can be performed at low temperature, no change in microstructure, uniform coating thickness, good adhesion, eco-friendly. Limitations: slow process (typically less than 300 nm/min), surface treatment of the substrate is required, high equipment cost. [191,192]
In-situ polymerization — Advantages: reduces agglomeration of the nanoparticles while ensuring excellent dispersion in the polymer matrix. Limitations: properties of the final product can be affected by unreacted substances from the in-situ reaction. [193]
Dip coating — Advantages: simple, cost-effective, practical, and applicable at different scales; the electrical resistance of the pressure sensor is easily controlled by manipulating the nanofiller loading and the number of dip coatings. Limitations: the polarity of the solvent, nanofiller, and substrate must be considered to ensure good dispersion of the nanofiller on the substrate. [194,195]
Spray coating involves applying a thin layer of a material to a substrate by spraying a suspension or solution of the material onto the surface. [197] This approach is often used to produce FFPS, as it is a relatively simple and scalable process. The coating liquid is sprayed onto the supporting substrate, such as PDMS, using a spray gun or other spray device. The spray parameters, such as the spray rate and distance, are precisely controlled to ensure that an even coating is applied. [197,198] During in-situ nanocomposite polymerization, nanoparticles are dispersed in a monomeric solution and polymerization is then carried out in the presence of the dispersed nanoparticles. [199] Hu et al. [200] describe the preparation of a polyurethane-based conductive sponge by coating it with silver nanoparticles after in-situ synthesis of poly(3,4-ethylenedioxythiophene) (PEDOT) on the backbone of a PU foam. Silver nitrate is dissolved in deionized water and the PU sponge is immersed in the silver nitrate solution for 5 min, followed by drying at 60 °C for 6 h to obtain the PU/PEDOT-Ag pressure sensor (Fig. 3b). Sputtering describes the ejection of atoms from a target by using high energy; the ejected atoms are subsequently deposited on the substrate material. [201] This technique was used to fabricate FFPS by sputtering gold nanoparticles (AuNP) onto PU foam (Fig. 3c). [25] In dip coating, a conductive nanofiller is dispersed in a suitable solvent and the foamed structure is then immersed into the solution. Finally, the solvent is removed in an oven to obtain a dispersed nanofiller on the foamed structure (Fig. 3d). [202] Comparing the methods, dip coating is the most suitable for the fabrication of FFPS because it is the most cost-effective, the final sensor can be used directly, no special equipment or post-processing is required, and the substrates can be fully covered with the applied nanofiller (Table 3). In addition, a wide range of nanofiller types and concentrations can be used, which makes it a very efficient method for mass production.
Evaluation of the performance of the pressure sensor
The piezoresistive pressure sensor detects the variation in electrical resistance of the sensor in response to deformations such as bending, compression, and torsion. To verify the performance of the sensor, various parameters such as the resistance change (ΔR/R0), strain, gauge factor (GF), and pressure sensitivity (S) are measured (Fig. 4). Three mechanisms underlie the sensing behavior of FFPS: nano-gap connection, micro-gap connection, and the fracture surface of the conductive layers (Fig. 5a-c). The nano- and microcrack mechanisms operate when low and medium pressures are applied (Fig. 5a,b), [203] while in the high-pressure range the contact area between the pores of the substrate and the fracture surface is the decisive factor determining S and GF (Fig. 5c).
S and GF are important factors in assessing the performance of FFPS. Various strategies have been employed to increase the sensitivity, such as using a substrate with a low elastic modulus. Such a system has a high deformation capacity, which results in an increased ability to detect low pressure/strain values. [206] Another method to improve S and GF is to create a rough surface in the conductive layers via the pre-strain method. Yang et al. [76] introduced microcracks into a PU foam structure by applying the pre-strain technique (Fig. 6). They first compressed the PU foam and then dip-coated it with graphene oxide (GO). After that, the compression load was released to create rough structures, such as micro-wrinkles and microcracks, on the conductive GO layers. The microcracks and micro-wrinkles act as microswitches that increase S and GF when pressure/strain is applied.
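For clarity, the short sketch below computes an average gauge factor from a synthetic resistance-strain series, using the conventional definition GF = (ΔR/R0)/ε implied here; the data are fabricated for illustration only.

```python
import numpy as np

# Gauge factor is conventionally defined as GF = (ΔR/R0) / ε, i.e. the
# relative resistance change per unit compressive strain. The data below
# are made up for illustration only.
strain = np.array([0.00, 0.05, 0.10, 0.20, 0.30])           # compressive strain
resistance = np.array([500.0, 430.0, 372.0, 280.0, 212.0])  # Ohm

r0 = resistance[0]
rel_change = (resistance - r0) / r0

# Slope of ΔR/R0 vs strain gives an average gauge factor over this range.
gf = np.polyfit(strain, rel_change, 1)[0]
print(f"gauge factor ≈ {abs(gf):.2f}")
```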
A new technique was used by Ma et al. to improve S and GF by increasing the number of dip cycles (1, 3, and 5) in the dip coating process (Fig. 7). The results showed that as the number of dip coating cycles increased, the GF increased due to the decreased initial resistance (R0), and the relative resistance change (ΔR/R0) also increased (Fig. 7a). Thus, more dip-coating cycles caused a significant increase in electrical conductivity (Fig. 7b). This phenomenon could be related to the formation of more conductive nanofiller networks. Likewise, the pressure sensitivity of the sensor with 3 dip coating cycles is higher than that with 1 dip coating cycle (Fig. 7c). However, the sensor with 5 dip coating cycles showed lower pressure sensitivity compared to the sensor with 3 dip coating cycles. This could be related to the fact that the sensor with 5 dip coating cycles has a higher Young's modulus, which results in less deformation of the sensor and consequently less change in resistance at the same compression pressure. [203,208] A different approach was used in ref. [209] to improve the sensitivity of a CNT/PU pressure sensor by using different CNT loadings of 2-10 wt.% (Fig. 8a-c). The highest S value was achieved at 4 wt.% CNT because the sensor with lower CNT content has a high initial resistance, which leads to lower sensitivity. In turn, sensors with more than 4 wt.% had high stiffness, which led to less deformation of the sensor and consequently less change in resistance upon pressure application.
Additionally, Nabeel et al. [210] used another method to enhance S and GF by increasing the pore volume of the substrate (Fig. 9a-c). A PU/CNT system was developed including PU with three different pore volumes. The results showed that the sample with the largest pore volume had the lowest initial electrical resistance. This phenomenon could be related to the fact that the larger the pore volume, the smaller the total amount of PU in the scaffold. Consequently, more CNTs are interconnected, resulting in more conducting paths and a larger effective conducting area throughout the PU scaffold. In addition, the low initial resistance of the PU leads to a large change in electrical resistance in response to pressure/strain, resulting in high values of S and GF. [210] The higher the total pore volume, the more wrinkles are formed and, thus, a lower total amount of PU and a lower density are achieved (Fig. 9d-i). Therefore, more interconnected CNTs lead to an increase in wrinkles and burrs. The burrs and wrinkles work as "microswitches" which can modulate the electrical resistance. [204] In addition, the surface of the sample with a higher pore volume is richer and covered with more carbon nanotubes (Fig. 9e-i).
Figure 8. The effect of carbon nanotube (CNT) loading (wt.%) on the properties of the CNT/PU pressure sensor: (a) gauge factor (GF), (b) electrical conductivity, and (c) sensitivity (S). [209]
The cyclic load test is applied to determine the durability and repeatability of FFPS, which also indicates the lifetime of the system. [211] In the cyclic test, the electrical resistance is measured in response to repeated loading and unloading of the pressure, which is performed to check the adhesion and bond strength between the nanofiller and the substrate. Zhu et al. [48] synthesized an AgNW/PU pressure sensor and investigated its stability over 2300 cycles in the pressure range of 2 kPa (Fig. 10a). The results showed that the amplitude of the peaks and the waveform were almost identical, indicating that the fabricated sensor has good stability and durability. Zhai et al. [212] prepared a sensor and subjected it to various cyclic compressive loads (15, 30, 60, and 80% strain) (Fig. 10b-e). It was found that the sensor had excellent repeatability due to the good electrical properties of the applied CB and the excellent flexibility of the PU substrate. However, the relative change in resistance at 80% strain shows a small drift in the first few cycles but soon becomes identical again.
In the case of an ideal FFPS, the peak amplitude and the waveform of the resistance signal remain unchanged during the cyclic test. [205] However, slight fluctuations in the peak amplitudes of the resistance signal are possible for various reasons. Due to the viscoelastic properties of the foam polymer matrix, which changes its structure to withstand the applied pressure, it takes a long time to recover. [39] In addition, the fracture of less stable conductive nanofiller layers during cyclic pressure could also cause such fluctuations. [76] Moreover, slight differences in the contact area of the conductive layers, even when the pressure is the same, could also lead to fluctuations.
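A simple way to quantify such cyclic stability is to detect the response peak of each load cycle and track how its amplitude varies. The sketch below does this on a synthetic ΔR/R0 trace with an artificial drift; the signal shape, sampling rate, and thresholds are assumptions for demonstration, not values from the cited studies.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic ΔR/R0 trace for repeated load/unload cycles (1 Hz loading,
# sampled at 100 Hz), with a small drift added to mimic viscoelastic
# relaxation of the foam. All numbers are illustrative.
fs, n_cycles = 100, 20
t = np.arange(0, n_cycles, 1 / fs)
signal = 0.30 * np.clip(np.sin(2 * np.pi * t), 0, None)    # response per cycle
signal *= 1.0 - 0.002 * t                                   # slow amplitude drift
signal += 0.005 * np.random.default_rng(0).standard_normal(t.size)

# One peak per loading cycle; compare amplitudes across cycles.
peaks, _ = find_peaks(signal, distance=int(0.8 * fs), height=0.1)
amplitudes = signal[peaks]
drift = (amplitudes[-1] - amplitudes[0]) / amplitudes[0] * 100
print(f"{len(peaks)} cycles detected, amplitude CV = "
      f"{amplitudes.std() / amplitudes.mean() * 100:.1f}%, "
      f"first-to-last drift = {drift:.1f}%")
```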
A desirable FFPS should have high sensitivity, linearity, and monotonicity. According to previous works, [99,213,214] the cyclic load behavior is not always monotonic; i.e., at maximum pressure an additional peak occurs, which is called a peak shoulder. This is caused by the destruction, reconstruction, and reformation of the conductive network during cyclic loading. [215] To achieve high sensor reliability, it is necessary to have a sharp main peak and to eliminate the shoulder peak. The shoulder peak at 15-30% strain is much clearer than that at 5% strain. A low strain leads to weak destruction of the nanofiller network, so it can easily recover after the strain is removed. [99] Five models of peak shoulder phenomena are presented in the literature (Fig. 11). [215] Several factors can be behind the increase in height and change in shape of the peak shoulder, including the viscoelastic properties of the polymer matrix, an unstable conductive network, and the agglomeration of the nanoparticles. [216] When the nanoparticles are well dispersed, the conductive layers connect easily and the sensor remains stable during pressure application. [215] On the other hand, agglomeration of the nanoparticles creates a high peak shoulder due to the separation of their layers. In addition, when the electrical conductivity exceeds the percolation threshold, the conductive layers are more stable and the chance of forming a high peak shoulder is lower.
Figure 9. Effect of the substrate pore volume on the PU/CNT sensor: sample 1 (d-e), sample 2 (f-g), and sample 3 (h-i). [210]
Figure 10. Cyclic load vs. time (a); typical cyclic load vs. time at (b) 15%, (c) 30%, (d) 60%, and (e) 80% strain. [212]
Improving the performance of pressure sensor
One of the biggest challenges associated with FFPS is the unintentional detachment of the nanofiller from the substrate when pressure is applied. This phenomenon can result from weak bonding between the nanofiller and the substrate, for example when only van der Waals forces hold the nanofiller and the substrate together. Many solutions have been proposed to overcome such issues and improve the performance of the sensor. [141,208,217] Nabeel et al. [218] designed a pressure sensor by coating PU with CNT and then impregnating the coated PU with silicone rubber (SR) to fill the pores of the PU and improve the stability of the final pressure sensor (Fig. 12a). Lv et al. [217] created a GO/polypyrrole@PU pressure sensor by immersing the PU in a hydrochloric acid solution to apply a positive charge, which exerted an electrostatic attraction on the negatively charged graphene oxide layer (Fig. 12b). Then, the GO/PU was dipped into a pyrrole monomer (Py) ethanol solution to absorb the Py monomer through the GO layers. Finally, the Py monomer was polymerized by immersion in FeCl3 (Fig. 12b). In addition, Li et al. [208] prepared an MXene-PU pressure sensor and improved its stability by dispersing MXene in the presence of chitosan (CS), alternately immersing the PU foam in positively charged chitosan and negatively charged MXene. This method improves the electrostatic interaction between MXene and PU and leads to an improvement in the stability of the pressure sensor (Fig. 12c). Another strategy used to improve the workability of the sensor was to prepare PU with different mixing ratios of isocyanate and polyol and then disperse CNT on the PU skeleton. The results showed that the samples with higher polyol ratios had lower electrical resistance and higher pressure sensitivity. This is because PU with an excess of polyol tends to be more hydrophilic. [141] Isocyanates are more reactive and responsible for the polymerization process and stiffness of the PU. Hence, an excessive amount of isocyanate leads to a stiffer and more hydrophobic PU. Therefore, the sample prepared with a higher polyol content is more hydrophilic and more stable when functionalized CNT is dispersed on it (Fig. 12d). [141] Wang et al. manufactured a superhydrophobic polymer composite foam by adsorption of an Ag precursor in tetrahydrofuran (THF) on a rubber sponge and subsequent reduction of Ag+ ions to Ag nanoparticles (Fig. 12e). During the process of Ag+ reduction in a hydrazine solution, the rubber sponge swells and partially precipitates when treated with THF, a phenomenon known as non-solvent-induced phase separation (NIPS). The formation of a porous structure on the surface of the sponge through NIPS leads to increased surface roughness, which in turn improves the superhydrophobic properties of the material. This approach improves sensor performance by fabricating superhydrophobic foamed pressure/strain sensors through non-solvent-induced phase separation. In addition, the interaction between individual AgNPs is improved by coating them with the precipitated polymer. The results show good reliability and durability of the foamed pressure/strain sensors. Multifunctional pressure or strain sensors [207,210,217-219] have high water repellency and heating effects and can be used in harsh environments such as low temperatures and high humidity. [207] Similarly, Qiang et al.
present the successful synthesis of graphene oxide nanoribbons (GONR) and their subsequent functionalization with silane molecules on a PU foam surface. The result of this process is the fabrication of porous composites with reduced graphene oxide nanoribbons (rGONR). These sensors exhibit remarkable properties such as superhydrophobicity, electrical conductivity, and mechanical flexibility. This study shows that the surface and physical properties of PU foam can be modified by grafting silane molecules onto GONR. The resulting changes range from insulating and hydrophobic properties to conductive and superhydrophobic properties. This approach aims to develop porous composites based on rGONRs with exceptional properties such as superhydrophobicity, electrical conductivity, and mechanical flexibility that can be used as strain sensors under harsh environmental conditions. [220] Zhai et al. fabricated a CB/PDMS FFPS with a three-dimensional conductive network by using CB to decorate porous PDMS foams via ultrasonic treatment. The CB is deposited both inside and outside the PDMS foam structure owing to the strong ultrasonication, which improves the stability of the electrical properties and response. The excellent linear response is due to the excellent conductive channels formed on the surfaces of the high-compression-modulus PDMS. In addition, the response of the sensor is well maintained in water due to its excellent hydrophobic properties (water contact angle of up to 149°). [46] Furthermore, Zhao et al. used a simple approach to manufacture high-performance FFPS. The approach involves the construction of a conductive/insulating/conductive sandwich-like porous foam structure (SPS) (Fig. 12f). The SPS comprises three layers: the bottom and top layers are electrically conductive rGO-coated foam nanocomposites fabricated by dip coating and chemical reduction, and the middle layer is an electrically insulating PU foam. The SPS sensors exhibit extreme resistance-switching performance, fast response and recovery times, high sensitivity, and outstanding mechanical properties. The impregnation of the conductive graphene network in the porous middle layer results in a highly efficient transition from an insulating to a conductive state. The sensor features high sensitivity, fast response, and good mechanical stability, offering a new concept for portable electronic applications. [219]
Morphology of flexible foam pressure sensors
The study of the morphology of FFPS is important as it provides insights into the structure of the sensor and its effects on performance. This is key to improving the durability, sensitivity, and overall functionality of the sensors and paves the way for advanced applications in various fields. The surface of the pure PDMS foam was smooth (Fig. 13a-b). In contrast, a wrinkly surface of the PDMS can be seen in Fig. 13c,d, demonstrating that Ag-CNTs-rGO nanocomposites are coated on the PDMS foam substrate. The Ag-CNTs-rGO nanocomposites overlap each other and are tightly connected with the adjacent CNTs (inset of Fig. 13d), which is valuable for improving the conductivity stability of the Ag-CNTs-rGO/PDMS foam. [221] The effects of CNT loading on the electrical conductivity of CNTs/PU foam nanocomposites were also investigated. [209] The morphologies of PU foam and CNTs/PU foam nanocomposites with different CNT loadings were characterized (Fig. 14a-f). The dispersion of CNTs at concentrations of 2, 4, 6, 8, and 10 wt.% on the PU foam was different (Fig. 14b-d). Compared to pure PU foam, the conductive CNT fillers are all bound to the foam scaffold. At a CNT concentration of 2%, the CNTs are evenly arranged on the surface of the foam without tangling with each other. As the CNT concentration increases, the CNTs on the surface of the foam become entangled with each other, extending the conductive network in space. However, when the CNT concentration is increased to 8 and 10 wt.%, too many carbon nanotubes stack on top of each other and gullies form. [209] Such phenomena are related to the percolation threshold: when the CNT loading is very low, a continuous conductive network may not form in the PU. However, as the CNT content increases, a crucial point is eventually reached, the so-called percolation threshold, at which an electrically conductive network begins to develop. Consequently, the electrical conductivity increases significantly. At higher CNT proportions, the probability of the CNTs aggregating or clustering increases. The presence of these clusters can hinder the development of a highly effective network for conducting electricity, leading to a reduction in electrical conductivity.
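The percolation behavior described above is often summarized by the classical scaling law σ = σ0(φ − φc)^t above the threshold φc. The sketch below fits this relation to invented conductivity-versus-loading data purely to illustrate how φc and the exponent t could be extracted; it is not a reanalysis of any dataset in this review.

```python
import numpy as np
from scipy.optimize import curve_fit

# Classical percolation scaling above the threshold: sigma = sigma0 * (phi - phi_c)**t.
# The loadings and conductivities below are invented for illustration only.
phi = np.array([0.02, 0.03, 0.04, 0.06, 0.08, 0.10])          # CNT mass fraction
sigma = np.array([1e-6, 3e-4, 2e-3, 1.1e-2, 2.8e-2, 5.0e-2])  # S/m

def log_sigma(phi, log_sigma0, phi_c, t):
    # Fit in log space so low- and high-conductivity points weigh similarly.
    return log_sigma0 + t * np.log10(np.clip(phi - phi_c, 1e-9, None))

popt, _ = curve_fit(log_sigma, phi, np.log10(sigma),
                    p0=[0.0, 0.015, 2.0], maxfev=20000)
log_sigma0, phi_c, t = popt
print(f"estimated percolation threshold ≈ {phi_c * 100:.2f} wt.%, exponent t ≈ {t:.2f}")
```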
Wu et al. fabricated CB/PU pressure sensors and investigated their electrical properties under pressure, as shown in Fig. 15a-e. When the CB@PU foam was pressurized, bending of the PU foam caused stress in the CB conductive layer. Therefore, mechanical microcracks are easily formed in the CB conductive layers (Fig. 15a,b). When the compression pressure is released, the microcracks close and form crack joints. The 3D network of the CB@PU foam before compression is intact, while it is deformed under 60% compression (Fig. 15c,d). It can be seen that some CB@PU backbones touched each other, increasing the contact area between the conductive layers. By varying the contact area between the CB@PU backbones, the CB@PU foam was able to detect large compressive loads. As the pressure on the CB@PU sponge increases, the microcrack connections in the CB layer loosen first (Fig. 15e). This causes interruptions in the local conduction paths of the foam. When the pressure load steadily increases, the crack spacing and crack density increase accordingly, leading to a further decrease in the conductivity of the CB layers (Fig. 15e, middle). When the applied strain reaches a certain value, some CB@PU backbones touch each other. This promotes the formation of more conductive paths (Fig. 15e, right), leading to an increase in the electrical conductivity of the CB@PU foam. These two mechanisms act simultaneously and explain the pressure-sensitive behavior of the CB@PU foam, which endows it with the versatility to detect a wide range of deformations with high sensitivity. [222]
Figure 13. Morphologies of the Ag-CNTs-rGO/PDMS sponge: (a-b) SEM images of the PDMS foam and (e-f) the Ag-CNTs-rGO/PDMS foam. [221]
Figure 14. Morphologies of PU foam and CNTs/PU foam nanocomposites with different CNT loadings, including (e) 8% CNTs and (f) 10% CNTs. [209]
Application of flexible foam pressure sensors (FFPS)
FFPS with a wide pressure range, high flexibility, and excellent pressure sensitivity meet the requirements for use in e-skin and wearable applications. [223] Generally, an FFPS is attached to the human body or placed on wearable textiles to detect human activity. FFPS can be used both in the low-pressure range, such as for breath and speech recognition, and in the high-pressure range, such as for motion detection. Chen et al. [17] prepared an MXene/PU foam and used it for e-skin and wearable device applications. Due to its wide pressure detection range, the sensor can recognize voices, facial movements, and hand and foot movements (Fig. 16a-e). [217] In another study, [213] rGO/PU was developed and used as a wearable pressure sensor to detect human activity. The sensor was placed on the neck, index finger joint, wrist, elbow, shoe sole, and face (Fig. 17). The results revealed that the current intensity changes proportionally to the pressure applied during neck bending, finger movement, wrist bending, arm bending, walking, and facial expression. When human motion compresses the sensor, the current of the sensor increases as it responds to the deformation of the sensor and returns to its initial value when the pressure is removed. These are encouraging results that offer the possibility of using foam-based sensors in intelligent robot applications. Dai et al. [70] created a CNT/PDMS foam pressure sensor which was fitted at various locations on the human body to detect different practical body movements and physiological signs (Fig. 18a-c). The sensor was attached to the wrist to detect the peaks of the heart pulses (Fig. 18a), and it can distinguish the T (tidal), P (percussive), and D (diastolic) peaks of each heart pulse from the steady waveforms of the radial artery pulse at a heart rate of 68 beats per minute. An FFPS was also able to detect breathing when placed on the subject's chest to monitor respiration, which is an important physiological signal for preventing sleep apnea. Periodic breathing produces repeatable variations of ΔR/R0 (Fig. 18b). In addition, the ΔR/R0 of the sensor shows distinct and repetitive patterns when the tester pronounces "Silicon," "Hi," and "Sensor," indicating a possible application in speech recognition devices (Fig. 18c).
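As an illustration of the pulse-monitoring use case, the sketch below counts percussive peaks in a synthetic ΔR/R0 trace to estimate heart rate; the waveform, sampling rate, and peak-detection thresholds are assumptions made for demonstration and are unrelated to the sensors cited above.

```python
import numpy as np
from scipy.signal import find_peaks

# Sketch: estimating heart rate from a wrist-mounted foam sensor trace.
# The ΔR/R0 waveform below is synthetic (68 bpm plus noise), used only to
# illustrate the peak-counting step; it is not data from the review.
fs = 200                                   # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)               # 30 s recording
pulse = 0.02 * np.clip(np.sin(2 * np.pi * (68 / 60) * t), 0, None) ** 3
pulse += 0.001 * np.random.default_rng(1).standard_normal(t.size)

# Percussive peaks are the dominant maxima of each beat.
peaks, _ = find_peaks(pulse, distance=int(0.4 * fs), prominence=0.005)
beat_intervals = np.diff(peaks) / fs       # seconds between beats
print(f"estimated heart rate ≈ {60 / beat_intervals.mean():.0f} bpm")
```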
A simple and efficient ultrasound-assisted dip-coating method was used to decorate PU foam with a carbonaceous nanofiller and produce a conductive foam. The resulting conductive sponges exhibited various excellent properties, such as good compressibility, fast response time, and high sensitivity, which are the desired characteristics of piezoresistive sensing materials. The hybrid CNT-CB/PU foam with a ratio of CNT:CB = 1:20 exhibits optimal comprehensive performance in both the low- and high-pressure ranges. This hybrid CNT-CB/PU foam can be developed into a piezoresistive strain sensor that can detect various human motions over a wide range of strains, such as blowing, swallowing, deep breathing, and the bending of fingers, elbows, and knees, as shown in Fig. 19. [147]
Figure 17. Detection of human activity with the wearable rGO/PU sensor, including (c) wrist bending, (d) arm bending, (e) nodding, (f) knee bending, (g) the facial expression of cheek-bulging, and (h) walking. [213]
In another work, pressure sensors were fabricated from PEDOT:PSS-modified PU foam. [206] These sensors can be used in many applications, including biomedical, sports, and robotics. The fully realized prototype polymer insole with integrated monitoring provides medical professionals with important data, such as the instantaneous body weight distribution and a comprehensive representation of walking dynamics. The use of PEDOT:PSS for PU foam functionalization is a suitable approach for pressure sensor fabrication. This technology offers several advantages, including lower sensitivity to pressure variations, cost efficiency, long-term stability, and ease of fabrication. The proposed micromechanical model can be used to predict the conversion capacities of the device and to adjust the parameters to achieve devices with sensitivity in the approximate range of 1 to 1000 Pa−1. [206] The authors of ref. [224] fabricated dual-mode strain sensors based on a porous GNP foam nanocomposite for strain measurement. The dual-mode waterproof sensors can detect large, small, and even subtle movements and temperature changes both in dry conditions and underwater (Fig. 21a,b). Furthermore, the dual-mode sensor can be used in a variety of environments, for example detecting pulse waveforms not only at common locations such as the carotid artery, wrist artery, and ankle, but also at much more difficult locations further away from the heart (Fig. 21c). The sensor detects pulse waveforms at the eyebrow bone (green), fingertip (purple), and toe (orange) (Fig. 21d). In addition, the sensor shows a rapid response to multiple voluntary contractions and relaxations of the biceps and flexor carpi radialis and correlates the response with the surface electromyogram (sEMG) in terms of contraction times and ranges (Fig. 21e,f). Contraction of the biceps muscle produces a substantial sensor response, totaling 300 units, with variations between 250 and 350 units. The observed local peaks within the sensor response envelope are consistent with the surface electromyography (sEMG) data (Fig. 21g). In addition, the sensor attached to the forearm of the human subject can also provide a fast response and is very sensitive to the simulated resting tremor of Parkinson's disease (Fig. 21h). [224] On the other hand, Sencadas et al.
fabricated FFPS and used them in robotic systems by synthesizing a foam structure of poly(glycerol sebacate) (PGS) and then incorporating CNT into it. Three sensors were attached to the fingers of a gripper (Fig. 22a). The status of each sensor was visually displayed using a simulated light-emitting diode (LED) and the corresponding change in resistance was plotted, as shown in Fig. 22b-d. The resistance of the individual sensors decreased as the gripper grasped the objects because the sensors were mechanically pressed together. The gripper can effectively grasp various objects with different shapes and textures and can also detect any form of contact between the tips of its flexible fingers and the objects to be grasped. These results demonstrate the potential use of such sensors in soft adaptive grippers and other soft robot applications that require elastic touch sensors. [225] There are many applications of FFPS; to highlight their versatility in various fields and to provide a comprehensive overview, Table 4 summarizes the pressure sensor types, their properties, and the corresponding applications.
Table 4 (material; GF or S; additional reported values; application; ref.):
…; …; …; health monitoring [231]
Graphene/PDMS; GF = 2.67-8.77; 36000; /; foot wearing [232]
Phosphorus-gold/MF; S = 2.047 kPa−1; 1000; 10; motion sensing, including breathing, finger bending, and wrist bending [233]
Polypyrrole-PU; GF = 2.6; 100; /; breathing detection [234]
Graphene/cotton-PU; S = 0.8-4.44 kPa−1; 15; /; wrist pulse detection and walking detection [235]
AgNP/rubber foam; GF = 4-32; 3000; 1400; human motion monitoring such as wrist bending, finger bending, and elbow bending [236]
Chitosan/ink sponges; S = 10.28 kPa−1; /; 133; large joint movements of the human body and the movement of muscle groups near the esophagus [237]
CNT-MXene/PDMS; GF = 1939; 10000; 158; human motion detection, pulse detection [238]
Graphene aerogel/PDMS; S = 2235.84 kPa−1; 1000; 120; human motion detection such as the bending of arms, fingers, and soles of the feet, showing that the sensor has good detection ability [239]
Graphene-PU; S = 476 kPa−1; 10000; 120; monitoring of the human body's pulse, the pressure on the skin, and throat swallowing [240]
CNT-GNP/PU; GF = 43000; 3000; 31; detection of finger bending, wrist bending, signal output for the Morse code "UKR", and walking [143]
Polydopamine-graphene/PU; GF = 3-21; 40000; /; electronic skin [241]
Carbon nanofibers/PDMS; GF = 2.57-3.21; 2500; /; body motion monitoring such as wrist bending, finger bending, and elbow bending [242]
PAC-CGO-Na hydrogel; GF = 6.67; /; 120; wearable electronics and human-machine interfaces [243]
Chitosan in-situ grafted magnetite nanoparticles; GF = 0.66-1.75; 63; /; electric skin, motion detection, and monitoring of human behaviors [244]
AgNP/PDMS; GF = 0.45-1.6, S = 0.11-0.22 kPa−1; 8100; 80; human health monitoring and wearable strain/pressure sensing applications [245]
Conclusion
A detailed comparison of flexible foam pressure sensors based on a variety of compounds, manufacturing processes, performance, and applications has been presented. To design FFPS with high performance, PU is a desirable substrate due to its ability to produce pressure sensors with variable levels of stiffness and compression strength. The hydrophilicity of PU can be controlled to match the polarity of the selected nanofiller. Moreover, it can also be successfully combined with hybrid nanofillers to create FFPS systems. Assessing the various preparation methods, dip coating is the easiest and most effective way to manufacture FFPS directly, without the need for advanced equipment. The performance of FFPS can be enhanced in terms of sensitivity and gauge factor by controlling the number of dip cycles of the substrate in the nanofiller suspension, the nanofiller loading, and the pore size of the substrate. In addition, the durability of FFPS can be improved by adding silicone rubber to fill the substrate pores and by improving the bonding between the nanofiller and the substrate. FFPS have huge application potential in bio-signal monitoring and human activity detection. Furthermore, the use of FFPS in smart robots and human-machine interfaces is a promising direction for future work. All in all, this work provides a comprehensive review of recent advances in flexible foam pressure sensors.
Figure 4 .
Figure 4. Typical working principle of the pressure sensor.
Figure 6 .
Figure 6. Flexible foam pressure sensors (FFPS): (a) manufacturing process, (b-c) morphology of FFPS without pre-strain, (d-e) morphology of a pre-strained FFPS sample, (f) relation between the change in relative current and pressure, and (g) relation between the change in relative current and strain. [76]
Figure 7 .
Figure 7. The effect of the number of dip coating cycles on the (a) electrical conductivity, (b) sensitivity (S), and (c) gauge factor (GF). [207]
Figure 11 .
Figure 11. (a) Peak shoulder phenomena during cyclic loading and (b) visualization of bending as a cause of the shoulder peak. [215]
Figure 15 .
Figure 15. Morphology of the CB@PU foam. (a, b) SEM images of the microcrack joint on a CB@PU foam after compressive pretreatment, magnification: (a) 2000× and (b) 4000×. (c) SEM images of an uncompressed and (d) a compressed CB@PU foam, magnification: 100×. (e) Schematic evolution of the conduction paths in a CB@PU foam during continuous compression deformation. The disruption of microcrack connections in the CB layer occurs at small deformations and interrupts local conductive pathways (middle). CB@PU backbones touch each other at large deformations, leading to the formation of further conductive pathways in the CB layer (right). [222]
Figure 18 .
Figure 18. Applications of the CNT/PDMS sensor for detecting various physiological signals and monitoring human body movements: a) wrist pulse monitoring; the inset shows the pulse waveform of one cycle (right), where the P-wave, T-wave, and D-wave are visible. b) Monitoring respiration under normal conditions and after running; inset: photo of the sensors attached to the chest. c) Detection of various acoustic stimuli when the wearer spoke "hi", "sensor", and "silicon". [70]
Figure 20 .
Figure 20. Realistic and schematic representation of (a) the prototype orthopedic insole with embedded pressure sensors mapping the spatial distribution of body weight, (b) representation of the pressure distribution when a person stands still, and (c) current-time diagram of each sensor monitoring the dynamics of footsteps during walking. [206]
Figure 21 .
Figure 21. (a, b) Utilization of the dual-mode waterproof sensors for the detection of small and large strain ranges. (c) Schematic diagram of the network of human arteries showing the subtlest movements at the eyebrow bone, fingertip, and toe. (d) Identification and measurement of the pulse waveform at the eyebrow bone (green), fingertip (purple), and toe (orange). (e) Comparison between the sensory response (blue) and the muscle action (red) identified by the surface electromyogram (sEMG) in response to different voluntary relaxation/contraction cycles of the biceps muscle group, with a magnified view of the sensory response during muscle relaxation and contraction. (f) Comparison of the sensor response (blue) and the sEMG signals (red) processed with an integrated data processing system within a period of 1 s. (g) Attaching the sensor to the biceps measures the signal of muscle tension. (h) Response of the sensor to a simulated static Parkinson's tremor at a frequency of 5.5 Hz, with an illustration showing the placement of the sensor. [224]
Figure 22 .
Figure 22. Application of the foamed pressure sensor in soft robotic systems: (a) soft gripper with the sensors attached; change in electrical resistance (b) before, (c) during, and (d) after releasing a tennis ball. [225]
Table 4 .
Summary of the application and performance of FFPS.
|
2023-10-14T15:22:08.186Z
|
2023-10-12T00:00:00.000
|
{
"year": 2024,
"sha1": "ec8d6163bafd212d9bf57d58e225ebf78bb71e4e",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/15583724.2023.2262558?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "3534437b31ef6bf9d08ac52409479e3f6df160ee",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
}
|
12928107
|
pes2o/s2orc
|
v3-fos-license
|
Crystal structure of tris[μ2-bis(diphenylphosphanyl)methane-κ2 P:P′]di-μ3-bromido-trisilver(I) bromide–N,N′-phenylthiourea (1/1)
The title complex, [Ag3Br2(C25H22P2)3]Br·C7H8N2S, comprises a trinuclear [Ag3Br2(C25H22P2)3]+ unit, a Br− anion and one N,N′-phenylthiourea molecule (ptu). Three AgI ions are linked via two μ3-bridging Br atoms, leading to a distorted triangular bipyramid with an Ag⋯Ag separation range of 3.1046 (6)–3.3556 (6) Å. The triangular Ag3 arrangement is stabilized by six P atoms from three chelating bis(diphenylphosphanyl)methane (dppm) ligands. The AgI ion presents a distorted tetrahedral coordination geometry. In the crystal, the bromide anion is connected to the ptu molecule through N—H⋯Br hydrogen bonds [graph-set motif R21(6)]. Each bromide/ptu aggregate links the complex ion via C—H⋯S and C—H⋯Br hydrogen bonds, leading to the formation of a three-dimensional network. Two phenyl rings from two dppm ligands were modelled as disordered over two sites.
S1. Comment
Studies of silver(I) complexes with diphosphanes have been receiving increasing attention (Matsumoto et al., 2001; Nicola et al., 2005; Nicola et al., 2006) because of their potential applications, such as their interesting luminescence properties (Song et al., 2010; Sun et al., 2011). The coordination chemistry of silver(I) complexes with phosphorus and sulfur donor ligands, on the other hand, has been of increasing interest due to potential applications such as antimicrobial activity (Isab et al., 2010). Herein, the title complex was prepared by reacting silver(I) bromide with the dppm ligand, followed by the addition of ptu in acetonitrile solvent. An unexpected complex containing the [Ag3(C25H22P2)3(µ3-Br)2]+ unit was formed, with the ptu remaining uncoordinated.
The title complex comprises a trinuclear [Ag3(C25H22P2)3(µ3-Br)2]+ unit, a Br− anion and one N,N′-phenylthiourea molecule (ptu). The bromide anions form triple bridges on both sides of the Ag3 plane, leading to a distorted triangular bipyramid with Ag···Ag separations of 3.1046 (6)-3.3556 (6) Å. The triangular Ag3 arrangement is stabilized by six P atoms from three chelating dppm ligands (Fig. 1). The AgI ion presents a distorted tetrahedral coordination geometry.
S2. Experimental
Bis(diphenylphosphanyl)methane, dppm, (0.1 g, 0.26 mmol) was dissolved in 30 ml of acetonitrile at 343 K and then silver(I) bromide, AgBr, (0.05 g, 0.27 mmol) was added. The mixture was stirred for 4 h and then N,N′-phenylthiourea, ptu, (0.04 g, 0.26 mmol) was added and the new reaction mixture was heated under reflux for 6 h, during which the precipitate gradually disappeared. The resulting clear solution was filtered and left to evaporate at room temperature. The crystalline complex, which deposited upon standing for several days, was filtered off and dried in vacuo (m.p. = 490-492 K).
S2.1. Refinement
H atoms bonded to C and N atoms were included in calculated positions and refined with a riding model using distances of 0.95 Å (aryl H) with Uiso(H) = 1.2Ueq(C); 0.99 Å (CH2) with Uiso(H) = 1.5Ueq(C); and 0.88 Å (N—H) with Uiso(H) = 1.2Ueq(N). Two phenyl rings from two dppm ligands are disordered. The ADPs of the ipso carbon atoms were constrained to be identical for each disordered pair of phenyl rings. The geometry of the minor moiety of each pair of disordered phenyl rings was restrained to be similar to that of the major moiety (within a standard deviation of 0.02 Å). The carbon atoms of one phenyl ring were restrained, with an effective standard deviation of 0.01, to have the same Uij components. To ensure satisfactory refinement, the atoms of each disorder component of the phenyl rings were restrained to lie within a common plane. The overall ratio of the two components of disorder, refined with the same free variable, is 0.516:0.484 (3).
Figure 1
The molecular structure with displacement ellipsoids drawn at the 50% probability level. The minor component of disorder is omitted for clarity. The dashed lines show N—H···Br hydrogen bonds between the ptu and the bromide anion. Part of the crystal structure showing intermolecular C—H···S and C—H···Br hydrogen bonds as dashed lines, forming a three-dimensional network.
where P = (Fo² + 2Fc²)/3. (Δ/σ)max = 0.002; Δρmax = 1.48 e Å−3; Δρmin = −0.67 e Å−3.
Special details
Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes.
|
2016-05-12T22:15:10.714Z
|
2015-03-21T00:00:00.000
|
{
"year": 2015,
"sha1": "e8f9275e10cef176a2b461b1587b4bb2a79a5de0",
"oa_license": "CCBY",
"oa_url": "http://journals.iucr.org/e/issues/2015/04/00/nk2229/nk2229.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e8f9275e10cef176a2b461b1587b4bb2a79a5de0",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
230663701
|
pes2o/s2orc
|
v3-fos-license
|
Mapping, Infrastructure, and Data Analysis for the Brazilian Network of Rare Diseases: Protocol for the RARASnet Observational Cohort Study
Background: A rare disease is a medical condition with low prevalence in the general population, but these can collectively affect up to 10% of the population. Thus, rare diseases have a significant impact on the health care system, and health professionals must be familiar with their diagnosis, management, and treatment. Objective: This paper aims to provide health indicators regarding the rare diseases in Brazil and to create a network of reference centers with health professionals from different regions of the country. RARASnet proposes to map, analyze, and communicate all the data regarding the infrastructure of the centers and the patients’ progress or needs. The focus of the proposed study is to provide all the technical infrastructure and analysis, following the World Health Organization and the Brazilian Ministry of Health guidelines. Methods: To build this digitized system, we will provide a security framework to assure the privacy and protection
Background
A rare disease (RD) is a medical condition with low prevalence compared to diseases prevalent in the general population, but there is no consensus on its definition [1]. The European Union describes an RD as a disease that affects no more than 1 person in 2000 [2]. In the United States, the Rare Disease Act of 2002 considers an RD to be any condition that affects fewer than 200,000 people across the country, or about 1 person in 1500 [3]. In Japan, an RD is considered to affect fewer than 50,000 people (ie, 1 in 2500 people) [4]. On the other hand, in Latin America, there is no consensus on the definition of RDs in terms of numbers. Each country has its own definition according to its public policies, decrees, the existence of adequate treatments, or the severity of the disease [5].
Although individually rare, RDs collectively affect up to 10% of the population. Thus, rare diseases have a significant impact on health, and health professionals must be familiar with their diagnosis, management, and treatment [6]. It is estimated that there are up to 8000 identified rare diseases, 80% of which are of genetic origin [7]. In Brazil, the Ministry of Health defines a rare disease using the criteria of the World Health Organization (WHO), that is, 1.3 cases per 2000 individuals [8]. In 2014, the Brazilian Policy of Comprehensive Care for People with Rare Diseases was established within the scope of the Unified Health System (Sistema Único de Saúde [SUS]) [9].
To overcome this informational barrier, the WHO recommends that research involving the study of the process dynamics of RDs at the national level be financed by public agencies. The formation of large multidisciplinary networks is a fundamental part of encouraging the collaboration of medical specialists, referral centers, and patient groups [10]. Providing an infrastructure for new mechanisms that promote the translation of basic research into clinically important products is still a priority. One of the most important opportunities addressed by the WHO reinforces support for "networks of excellence that focus on research infrastructures; research, infrastructure, and implementation of guidelines for medical and psychosocial care; [and] methods to provide easy access to health care available to patients, regardless of where they live" [11].
The first full version of a report relating to the concept of a regionalized health network emerged after World War I, due to the consequent need for changes in the social protection system, and presented better ways of organizing health services. Almost one century later, the original purpose remains very similar, as many health systems around the world offer a specific organization of services to attend to their populations. The need to establish a uniform system of clinical histories was declared a crucial reason for the better integration of different levels of health services [12].
The proposal to integrate health networks gained momentum with the advent of health observatories. The emergence of these epidemiological monitoring centers is related to rapid changes in the health sector, including the need to monitor and assess the impact of health programs and public health policies, the advent of informational intelligence, digital health, and health knowledge management. The health observatories' functions include locating, gathering, analyzing, synthesizing, and disseminating data on the health status of a population, in addition to establishing partnerships and contacts with other agencies involved in the health of that region [13].
Independent of which model is used, the majority of countries today have digital systems to manage their data according to their health network structures. According to WHO recommendations, although digital health interventions are not enough on their own, when combined with health professionals, they are vital tools to promote health quality [11]. While data are still the main asset in the current digital world, many institutions do not yet fully understand the need and advantages of sharing their data with other organizations [14].
Cross-institutional sharing of health data is a challenge because many institutions are unwilling to share data due to privacy concerns or the fear of giving other institutions competitive advantages and, at an operational level, because of mistrust of technical barriers (there is no common platform for sharing these complex and heterogeneous data). On the other hand, overcoming this challenge may lead to better clinical effectiveness and improved clinical research [15]. Even if these concerns can be overcome, there is no consensus about the exact technical infrastructure needed to support such an effort.
Studies have shown that a set of meaningful hurdles remains before the desired benefits of health care data exchange can be achieved. For example, failing to secure the patient record has financial and legal consequences as well as the potential to negatively impact patient care. Thus, the possible repercussions of a breach discourage data exchange. To avoid ethical and legal consequences for institutions, anonymization and privacy must be ensured for sensitive data, making them available only to authorized persons. Further, data anonymity could help advance research by removing identifiable information and sharing only limited data [15,16].
Another significant barrier that health networks need to face is adequate technological infrastructure. In addition to the scarce and fragmented availability of data on RDs [17], several technical weaknesses are common, such as a centralized data source, which represents a high security risk because a nonredundant authority is susceptible to malicious attacks. A secure channel for sending data to other organizations is another feature that institutions must address to avoid unauthorized access [14].
Due to the complexity of data in the health domain, achieving full interoperability is a hard task. Heterogeneous structures and data diversity decrease the accuracy of analysis and reduce understanding of information. To face this issue, several entities have created standards for data exchange. However, there is no consensus on the most adequate ones [18]. In Brazil, Ministry of Health Ordinance 2073 of August 31, 2011, regulates the use of interoperability and health information standards for health information systems within the scope of the SUS; at the municipal, state, and federal levels; and in private systems and the private health sector [19].
In this sense, for there to be semantic interoperability between independent systems, it is necessary to standardize two aspects of information: the structure of the information and the semantic representation of the information. The information structure concerns the information and knowledge models that allow systems to exchange data correctly, composed into larger structures such as documents. The semantic representation of information includes terminologies, ontologies, and controlled vocabularies [20].
Such standardization can mitigate several of the barriers that prevent data access by research support agencies, academics, managers, and health professionals, such as the noncomputerization of processes, the heterogeneity and duplicity of data in health information systems, and the existence of a large amount of isolated data in databases accessible only in a certain context, usually to answer specific questions in particular research [17].
These factors often cause problems in the quality of information, making it difficult to coordinate and evaluate data in a research network linked to a rare disease patient care network, so that the data cannot be used to assist in the decision-making process. Therefore, in health care, decision support tools are essential to guide the practice of health care and support the decisions of managers, which directly influence the quality of care provided to patients with RDs [21]. This paper presents a subproject of the main project, entitled the Brazilian Rare Disease Network, funded by the Brazilian Council for Scientific and Technological Development (Conselho Nacional de Desenvolvimento Científico e Tecnológico [CNPq]), with a 2-year forecast. The main project is a mixed prospective and retrospective observational cohort study to map the landscape of rare diseases in Brazil. RARASnet is responsible for infrastructure and data analysis to provide health indicators and support the construction, organization, and monitoring of this patient network. Although the objectives are collaborative, different teams are responsible for specific parts.
Related Work
RDs represent a major challenge for the organization of health care. The cooperation of health service professionals, civil society, and academia is essential to overcome this challenge. In recent years, different collaborative networking initiatives have emerged. Collaborative networks are of great value for science and technology institutions to share, generate, and disseminate new knowledge that can lead to innovations and form a solid basis for a national care network [22-24]. In this scenario, records represent an important tool in acquiring the necessary knowledge about the clinical form and natural history of patients with RD.
Maintaining records of epidemiological data also contributes to the planning of public health programs, which in some cases requires supranational coordination. Thus, European Reference Networks (ERNs) were organized, supported by a series of rules and guidelines that provide a cohesive structure for sharing good practices of diagnosis, treatment, and standardization of recommended approaches for RDs. The promotion of ERNs contributed to the identification of already established centers of expertise and encouraged the voluntary participation of health service professionals in ERNs dedicated to specific groups of RDs [22].
Italy was one of the first European countries to develop specific RD regulations. The success of the experience in Italy is exemplified by the regional network of Piedmont and Valle d'Aosta, where networked activities have provided several benefits, such as improvement of multidisciplinary knowledge, provision of quality care, and reduced cost of therapeutic mobility [22,23]. To produce epidemiological evidence on RDs and support health service policy and planning, Italy also assessed the integrity and consistency of procedures carried out from its national registry and found that the data quality still represents a limitation to any solid epidemiological estimate [22].
After comparing the outcome of patients with primary systemic amyloidosis in a referral center with the population of this same Italian network, another study in the country found that the patients observed by the network had a diagnosis 4 months earlier than those seen in the reference center. In addition to the rapid dissemination of knowledge pointed out as the main cause of this difference, important epidemiological differences were observed, which further reinforces the need for the standardization of reliable prognosis and the administration of clinical trial results [22,24].
According to the French National Plan for Rare Diseases, the first step in identifying patients with RD who are eligible for clinical trials or cohort studies is the definition of a minimum set of national data. In addition, providing reference centers with information technology (IT) tools contributes to the improvement of the service and research of RDs. Thus, according to international regulations on privacy and intellectual property and based on interoperability and semantics standards, the construction of the French model allowed data sharing in a national network composed of 131 centers specialized in RDs [22,24-26].
One of these French national reference centers went further and created a web-based medical archive of pediatric interstitial lung diseases. The construction of a national database made it possible to centralize and serve various stakeholders, such as researchers, clinicians, epidemiologists, and the pharmaceutical industry. Consequently, with the increasing engagement of new participants and the creation of committees to control data quality, there was an increase in the accuracy of the information provided, and several alternative solutions, depending on local possibilities, were configured [27]. Similar initiatives have also taken place in Germany [28] and the United Kingdom [29].
To not only collect records but also analyze them, a conceptual and digital framework based on the Asia-Pacific Economic Cooperation Rare Disease Action Plan has been articulated [26]. A proposal for a rare disease registry and analytical platform aims to assist in clinical decision making and improve the design and delivery of health services [30].
On the other side of the ocean, the National Center for Advancing Translational Sciences, one of the 27 departments of the US National Institutes of Health (NIH), maintains initiatives that aim to enhance the research of rare diseases, such as the promotion of information sharing and the construction of multidisciplinary collaborations. The Rare Diseases Clinical Research Network, for example, despite being formed by distinct clinical research consortia, shares the same data coordination and management center [31]. This management is only possible due to the availability of a genomic database maintained by the Genetic and Rare Diseases Information Center and the RD record program based on international standards and sharing (eg, Health Level Seven, Human Phenotype Ontology [HPO]) as well as the toolkit for the development of patient-focused therapies (National Center for Advancing Translational Sciences Toolkit), represented by an information portal with guidelines for the development process of research and partnerships with the NIH and the Food and Drug Administration [32].
Worldwide, these two actions integrated not only clinical and epidemiological data and records but also information from biorepositories of biological samples for rare biospecimens (RD-HUB) [33], and they created an integrated platform that connects databases, records, biobanks, and bioinformatics clinics for rare disease research (RD-Connect) [34]. To address the quality of these data, several models and tools have been developed worldwide [34,35]. An assessment approach for diagnosing rare diseases based on Unified Modeling Language and ontologies, called FindZebra, improves the quality of diagnosis compared with standard search tools [34-36].
To become more than a search engine, decision support systems for clinical diagnosis have incorporated artificial intelligence and natural language processing techniques to provide more accurate and useful systems [37]. By having the infrastructure established according to the Ministry of Health, we can focus on the data analysis. Data science has been playing a major role in retrieving insights from patient reports and human and technical resources. Thus, the main goals of this project are described in the following section.
Objectives
The primary objective of this study is to identify the essential elements for mapping, infrastructure, and data analysis for the Brazilian Network of Rare Diseases. Secondary objectives are to (1) create and implement a system that allows the integration of data available in different systems of health care, social assistance, and epidemiology of RD cases (a shared electronic medical record), simplifying the access to patient data via the web by health care stakeholders; (2) promote interoperability between health information systems through the use of the Semantic Web combined with traditional communication and data exchange techniques for functional and semantic interoperability and the integration of databases to improve the management of health services data; (3) develop a single and complete database using cloud computing with an access hierarchy and well-defined security rules by building a ubiquitous platform capable of providing access services and adding syntactic and semantic value to data, covering innovative techniques such as the use of blockchain for cloud computing; and (4) develop an evidence-based portal with national protocols for monitoring and analyzing data collected or produced in several RD patient care settings, incorporating data processing, analysis, and machine learning techniques to assess the clinical situation and possible patient risks in real time.
Brazilian Rare Disease Network
Typically, information technology investigations can be distinguished as basic or applied research. Basic research is scientific research focused on improving the understanding of phenomena and events [38]. Applied research uses scientific studies to develop technologies and methods to intervene in natural or other phenomena, aiming to improve human interaction with such phenomena [39].
As mentioned, the study described is part of a larger project, entitled the Brazilian Rare Disease Network, with a collection of quantitative data coupled with an innovation proposal, the creation of an epidemiological surveillance service network involving university hospitals, Reference Services for Neonatal Screening (Serviços de Referência em Triagem Neonatal [SRTNs]), and Reference Services for Rare Diseases (Serviços de Referência em Doenças Raras [SRDRs]) throughout the Brazilian territory.
Considering the goal of consolidating a national network of rare diseases that covers all regions of Brazil, this study has the participation of SRDRs, university hospitals that may or may not belong to the Brazilian Hospital Services Company (Empresa Brasileira de Serviços Hospitalares) network, and SRTNs. These centers are essential for building a national database that efficiently maps and represents the situation of the field of rare diseases in a country [40]. Brazil is divided into 5 regions (north, northeast, midwest, southeast, and south). The chosen participating centers are distributed across all Brazilian regions and are units of reference in health care for the population of their respective localities, according to the National Policy on Comprehensive Care of People with Rare Diseases [9].
Participating health centers are divided as follows by country regions: 6 centers in the north, 11 in the northeast, 6 in the midwest, 12 in the southeast, and 5 in the south. These include 16 Brazilian capitals that together have a total of 47 million people. In addition, as they are referral centers, they have the infrastructure to receive patients from smaller municipalities for the diagnosis and care of their population.
The area of care for people with rare diseases is structured into primary care and specialized care, following the Health Care Network (Rede de Atenção à Saúde) and the Guidelines for the Comprehensive Care for People with Rare Diseases plan of the SUS. SRDRs are responsible for preventive, diagnostic, and therapeutic actions for individuals with rare diseases or at risk of developing them, according to care axes. The SRDRs work with a network of Specialized Rehabilitation Centers (Centros Especializados em Reabilitação [CERs]), which can receive patients referred from SRDRs and assist in the rehabilitation of these patients [41].
The CERs are structural components of the National Policy on Comprehensive Care of People with Rare Diseases. In line with the integrality of care, these centers perform the treatment, provision, adaptation, and maintenance of assistive technology, constituting a reference for the health care network in the territory [42]. SRDRs and CERs work together with university hospitals to promote comprehensive and universal care for rare disease patients.
The traditional concept defines a university hospital as an institution that is characterized by four traits: being an extension of a health teaching establishment (of a medical school, for example), providing university training in the health field, being officially recognized as a teaching hospital and subject to the supervision of competent authorities, and providing more complex medical care (tertiary level) to a portion of the population and being able to receive patients from SRTNs [42,43].
The SRTNs are units with multiprofessional health teams accredited and specialized in assistance, follow-up, treatment, and redirection of newborn patients diagnosed with pathologies such as phenylketonuria, congenital hypothyroidism, sickle cell diseases, biotinidase deficiency, congenital adrenal hyperplasia, and cystic fibrosis. Such pathologies are detected in the SRTN's own or an outsourced laboratory, according to the rules established in the National Neonatal Screening Program [44].
Initially, the 3 main collaborator groups consist of 17 university hospitals, 6 SRTNs, and 17 SRDRs. The effective consolidation of the Brazilian network of rare diseases, based on the mapping of these services, depends on 3 steps: (1) approval by the ethics committee of the coordinating institution of the project, (2) approval of the local ethics committees of each participating institution, and (3) consolidation of the human resources participating in each institution through the institutional consent form.
The first step has already been completed and the others are in progress. Any divergence in these steps results in the exclusion of the participating center from the project. While these steps are in progress, representatives of all participating institutions meet monthly, on the morning of the second Saturday of the month, to discuss and structure the other activities of the project. Additionally, institutions must disseminate and invite partner services to participate in the initiative. The structuring and alignment of the final group of participants in the Brazilian network of rare diseases was finalized in August 2020.
Ethical Considerations
The National Network of Rare Diseases project was approved in call (Edital) No. 25/2019 from CNPq, with financial support from the Ministry of Health in the amount of R$3.5 million (US $662,139.10) [45]. Moreover, the main project was sent to the research ethics committee of Porto Alegre Clinical Hospital of the Federal University of Rio Grande do Sul (Hospital de Clínicas de Porto Alegre da Universidade Federal do Rio Grande do Sul) through Plataforma Brasil, a Brazilian Ministry of Health platform for research projects. The research ethics committee of Porto Alegre Clinical Hospital analyzed the research project (Presentation Certificate for Ethical Appreciation number 33970820.0.1001.5327) and approved it (opinion number 4.225.579) on August 14, 2020. To ensure the anonymity of patients while making it possible to track them if necessary, a coded identifier will be created for all patients, consisting of the first 2 letters of the city, followed by the center number with 2 digits (from 01 in each city), and a 2-digit sequence for the patient's number. The rights, safety, and well-being of the subjects involved in the study will be the most important considerations and should prevail over the interests of science and society.
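The identifier scheme described above can be expressed as a small deterministic function. The sketch below is a minimal illustration and assumes hypothetical inputs; the function name and validation rules are ours, not part of the project's specification.

def patient_code(city: str, center_number: int, patient_number: int) -> str:
    """Build the pseudonymized patient identifier:
    first 2 letters of the city + 2-digit center number + 2-digit patient sequence."""
    if not (1 <= center_number <= 99 and 1 <= patient_number <= 99):
        raise ValueError("center and patient numbers must fit in 2 digits")
    prefix = city.strip().upper()[:2]
    return f"{prefix}{center_number:02d}{patient_number:02d}"

# Hypothetical example: 3rd patient of center 01 in Ribeirão Preto -> "RI0103"
print(patient_code("Ribeirão Preto", 1, 3))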
Considering the governmental efforts (ConecteSUS) [46,47], we similarly propose the use of a permissioned distributed blockchain solution that uses a key pair (private and public key) and a symmetrical consortium key for data encryption. A consortium distributed storage network will be established, consisting of research centers and other approved stakeholders throughout Brazil [48].
Authentication, authorization, integrity, and confidentiality verification mechanisms will be implemented through the establishment of a security layer. Thus, the security structure presented in this project aims to protect sensitive data for interoperability purposes. All computational techniques that support the solution, such as encryption and hashing, are well-known technologies that, when combined, can offer robust security features. In this way, each candidate system to interoperate with the rare disease ecosystem can easily meet all the necessary technical requirements.
All data collection processes will comply with the new Brazilian General Law of Data Protection (federal law No. 13.709/18) [49]. The law concerns respect for user privacy, transparency in data collection, security, and prevention of damage involving personal data. Since August 16, 2020, the law covers all national territory, and its violation can result in a warning, penalties, a data block, and suspension of the project [50]. As mentioned, the project will ensure the anonymity of the data during analysis. In addition, the IT team will present to all members of the network the definition and main aspects of the General Law on Protection of Personal Data (Lei Geral de Proteção de Dados Pessoais) using supporting materials and a patient consent form for data collection and usage, with full transparency.
RARASnet Project Management
The project management will include the cooperation and execution of several activities, including technological and technical implementation, that must be harmonized. The technical IT group will coordinate activities related to electronic resources, such as data collection instrument design, database management, and data analysis. The IT team is also responsible for maintaining a communication channel with the project's principal investigators to receive clinical, administrative, and research input.
A set of practices that merge development and operations (DevOps) will be used as a reference to standardize the development process and align activities of software engineering, infrastructure operation, and quality assurance. As an agile methodology, DevOps allows quick delivery of a small set of requirements from concept to deployment. The method also creates efficiency in results monitoring due to continuous integration and the appreciation of high-value feedback from all stakeholders [51].
For project management and to increase collaboration across team members, Trello (Atlassian) [52] will be used, which provides easy visualization of tasks and priorities, as well as a macrovision of development stages. The workflow of a data analysis project will follow the classic steps of a knowledge discovery in databases process [53], detailed in the following subsections.
Data Collection Procedures
Initially, the instruments to be used in data collection will be framed, validated, and tested. These instruments should serve as a basis for the steps that involve the survey of retrospective data in the participating institutions and as a model for the stage involving the prospective survey and analysis. Based on an initial report characterizing the informational maturity of the collaborating institutions, online training will be given to address the functioning of the data collection instrument developed, validated, and tested for the project's retrospective phase, and the same process will be carried out later in the prospective phase of the project. The collection will be carried out through access to medical records, with data recording on portable computers acquired with funds from this proposal and carried out by fellows of the project with the support of researchers from each service. Data quality indicators will be monitored in this intervention, mainly regarding the difficulties encountered by institutions in codifying the diseases in an interoperable way, ensuring the production of a reliable picture of the maturity of data collection of rare diseases in Brazil.
To ensure the monitoring of data quality indicators, a data quality assessment based on the dimensions used in early hearing detection and intervention (EHDI) programs will be conducted, and dimensions such as completeness, uniqueness, timeliness, validity, accuracy, and consistency will be evaluated [54]. Elements not present in the EHDI framework, such as acceptability, reliability, and flexibility, will also be considered; the use of dimensions will vary depending on the requirements of each center. These indicators were selected based on their importance in monitoring and evaluation in the National Policy on Comprehensive Care of People with Rare Diseases. It will also allow tracking results from the source to the national level and be indicative of data quality for all the indicators within a program area [55].
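Some of these dimensions can be computed directly from the collected records. The sketch below is a minimal illustration of completeness, uniqueness, and timeliness checks with pandas; the data frame, its column names, and the 30-day timeliness threshold are illustrative assumptions, not project definitions.

import pandas as pd

# Hypothetical extract of collected records; column names are illustrative only.
records = pd.DataFrame({
    "patient_code": ["RI0101", "RI0102", "RI0102", "SP0201"],
    "orpha_number": ["ORPHA:558", None, "ORPHA:558", "ORPHA:98896"],
    "collection_date": pd.to_datetime(["2020-10-05", "2020-10-07", "2020-10-07", "2020-11-02"]),
})

# Completeness: share of non-missing values per field.
completeness = records.notna().mean()

# Uniqueness: proportion of distinct patient codes (duplicates lower the score).
uniqueness = records["patient_code"].nunique() / len(records)

# Timeliness: fraction of records entered within 30 days of a reference date.
reference = pd.Timestamp("2020-11-30")
timeliness = ((reference - records["collection_date"]).dt.days <= 30).mean()

print(completeness, uniqueness, timeliness, sep="\n")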
During data collection, phenotypic data will be described according to HPO terms, restricted to 5 terms per case, allowing the description of phenotypes of known syndromes. Information about the coding of the disease will also be presented, considering the name of the disease, the International Classification of Diseases 10th Revision (ICD-10), the Orpha number, and the gene name or symbol, thus allowing comparison with data from other platforms, such as Orphanet.
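To make the coding rules concrete, the sketch below shows one way a case record could be represented and validated; the disease, codes, and HPO identifiers are examples chosen for illustration and are not project data.

# Illustrative case record following the coding rules described above;
# the ICD-10 code, Orpha number, and HPO identifiers are examples, not project data.
case = {
    "disease_name": "Cystic fibrosis",
    "icd10": "E84.0",
    "orpha_number": "ORPHA:586",
    "gene_symbol": "CFTR",
    "hpo_terms": ["HP:0002205", "HP:0001733"],  # at most 5 terms per case
}

def validate_case(record: dict) -> None:
    required = {"disease_name", "icd10", "orpha_number", "gene_symbol", "hpo_terms"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    if len(record["hpo_terms"]) > 5:
        raise ValueError("phenotype description is limited to 5 HPO terms per case")

validate_case(case)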
Data collection instruments (ie, case report forms [CRFs]) will be established by principal investigators and applied in distinct project phases, each with a specific objective. The development of all CRFs is guided by the National Policy of Comprehensive Care for People with Rare Diseases [56,57] in the context of the Brazilian Health Public System.
The main instruments are (1) a survey of the technical and technological resources of the participating research center, used to recognize needs and prepare and provide resources for data integration and collection across research centers; (2) a survey of procedures performed at participating centers, used to recognize the availability of technological resources for genetic diagnosis and human resources in the assistance of individuals with rare diseases; (3) a retrospective collection of clinical data, that is, the characterization of the clinical profile of patients with RDs treated throughout the country in the last 2 years; and (4) a prospective collection of clinical data, that is, the follow-up of patients with the defined RD clinical profile treated throughout the country, for the identification of changes in the clinical profile, such as in diagnosis and treatment.
After the initial development, the validation phase will take place. Key researchers, along with main investigators, will perform several rounds of revision and validation for each CRF. This process will occur until researchers reach a consensus. Then, the final version of an instrument (usually a paper-based one) will be translated into an electronic-based version.
Computational Infrastructure and Data Collection Resources
The study will rely on a computational infrastructure to satisfy technological needs during all project phases. First, cloud computing resources were acquired as an infrastructure as a service. This makes it possible to quickly scale up and down with demand. Additionally, the expense and complexity of buying, managing, and maintaining physical servers and other data center infrastructure are avoided [58].
In this case, the University of São Paulo provides a private cloud computing environment (interNuvem USP) and manages the whole infrastructure, while the project's owners only need to install, configure, and manage their own software, operating systems, and applications. Several resources, such as web, database, and data collection servers, will be available to help deliver this project outcome.
During the project, it will be necessary to collect data using CRFs. To facilitate the creation of electronic CRFs and their distribution, REDCap (Research Electronic Data Capture) and KoBo Toolbox will be used as electronic data capture systems. REDCap was built in 2004 by a team at Vanderbilt University to enable clinical and translational research, basic science research, and general surveys, providing researchers with a tool for the design and development of electronic data capture tools [59].
KoBo Toolbox, developed by the Harvard Humanitarian Initiative, is a free and open-source suite of tools for field data collection and basic analysis. It was initially built for use in challenging environments in developing countries, but it can be extended to any type of research [60]. Both electronic data capture systems are free, although licensing is necessary for REDCap. After applying for a REDCap license of use, the RARAS REDCap Server was established, which is now part of the REDCap Consortium, a community of experts and REDCap administrators [61]. KoBo Toolbox does not demand a licensing process and the software is publicly available for download and installation.
REDCap and KoBo Toolbox are integrated and can be used together. The first is used for data research, data storage, reporting, analysis, and management. The second is used exclusively in the data collection process as a front-end tool for final users, allowing responsive and offline data collection on any type of device without the need to install any third party or additional mobile app. After submitting a record in KoBo Toolbox, data are instantly synchronized with the REDCap database. This integration is possible due to a framework developed by the IT group.
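The integration framework itself is not described in detail here; the following is a minimal sketch of how records could be pulled from the KoBo Toolbox REST API and pushed into REDCap through its record import API, assuming hypothetical server URLs, tokens, and a one-to-one field mapping. It illustrates the data flow under those assumptions and is not the project's actual implementation.

import json
import requests

KOBO_URL = "https://kf.kobotoolbox.org/api/v2/assets/ASSET_UID/data.json"  # hypothetical asset
KOBO_TOKEN = "kobo-api-token"        # placeholder credentials
REDCAP_URL = "https://redcap.example.org/api/"
REDCAP_TOKEN = "redcap-api-token"

def fetch_kobo_submissions() -> list:
    """Retrieve raw submissions from a KoBo Toolbox form."""
    resp = requests.get(KOBO_URL, headers={"Authorization": f"Token {KOBO_TOKEN}"})
    resp.raise_for_status()
    return resp.json().get("results", [])

def push_to_redcap(records: list) -> int:
    """Import records into a REDCap project; assumes field names already match."""
    payload = {
        "token": REDCAP_TOKEN,
        "content": "record",
        "format": "json",
        "type": "flat",
        "data": json.dumps(records),
    }
    resp = requests.post(REDCAP_URL, data=payload)
    resp.raise_for_status()
    return len(records)

if __name__ == "__main__":
    submissions = fetch_kobo_submissions()
    print(f"synchronized {push_to_redcap(submissions)} records")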
Database Modeling
By exploring the data sources of the Orphanet platform related to information on medicines and rare diseases, we started the modeling phase of the database. Additionally, materials were selected for the knowledge acquisition phase for the development of a computational ontology that will reuse the Orphanet Rare Disease Ontology (ORDO) [62], thus helping the classification and hierarchization of bibliographic data on the prevalence of these diseases. After this initial analysis to select the best attributes (variables) that represent this health domain and are aligned with the profiles of the participating centers, the second stage of modeling the database is expected to start [25].
The first step is important so that the system does not request variables that are not relevant to the study, reducing the time taken to collect patient data by the health professional. More specifically, conceptual modelers describe structure models in the form of entities, relationships, and constraints, as they can also describe behavioral or functional models in terms of states, transitions between states, and actions performed on states and transitions. Finally, they can describe interactions and user interfaces in terms of messages sent and received and information exchanged. At the end of the first stage, a system requirements document must be prepared, detailing all functional and nonfunctional aspects of the implementation and application layers [31].
To facilitate the understanding of the information flow and operational processes of the participating institutions, auxiliary diagrams will be produced using the Business Process Model and Notation (BPMN) approach. Such documents will be used during the project to validate the information from the services, which will also be useful for the implementation and maintenance phases of the database.
The second phase of the modeling, therefore, consists of mapping the model in the form of relational tables. To ensure data consistency, the mapping is done according to the rules of the relational model, which was chosen because of its simplicity and robustness and because it uses structured query language (SQL), which has become common in relational databases. To generate the first model of the proposed database, the MySQL Workbench (Oracle Corp) software will be used, which allows data management and SQL queries to be built and facilitates the administration, creation, and maintenance of several databases in the same location. In this way, the database will be ready for use and its implementation will be dynamic, offering scope for future updates and maintenance [32].
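As a rough illustration of what such a relational mapping could look like, the sketch below creates three simplified tables; the table and column names are ours and do not reflect the project's actual schema, and SQLite is used only to keep the example self-contained, whereas the project itself targets MySQL.

import sqlite3

# Simplified relational sketch; illustrative schema only.
DDL = """
CREATE TABLE health_center (
    center_id   INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    region      TEXT NOT NULL            -- north, northeast, midwest, southeast, south
);
CREATE TABLE patient (
    patient_code TEXT PRIMARY KEY,       -- pseudonymized identifier
    center_id    INTEGER NOT NULL REFERENCES health_center(center_id),
    birth_year   INTEGER,
    sex          TEXT
);
CREATE TABLE diagnosis (
    diagnosis_id INTEGER PRIMARY KEY,
    patient_code TEXT NOT NULL REFERENCES patient(patient_code),
    icd10        TEXT,
    orpha_number TEXT,
    gene_symbol  TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
print([row[0] for row in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")])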
Data Quality Assurance
As previously stated, both the retrospective and prospective phases will collect study data using the KoBo Toolbox electronic data capture tool and store them using the REDCap server hosted at the Ribeirão Preto Medical School, University of São Paulo, Brazil. The KoBo Toolbox online data entry system will minimize data entry errors and facilitate the monitoring and quick resolution of queries and missing data.
The data collection tools will be reviewed by other researchers and pretested on a convenience sample of records and clinical settings. Reviewers will note their individual experience with both the definitional criteria and the time taken to collect and record data. Based on the final pretest, revisions will be made to both data collection instruments.
A manual of operations will be developed to minimize the need for judgment and interpretation by the data collectors and to increase the quality of data collection done by the health care center professionals. The manual of operations will include a description of the study in general terms, emphasize the importance of complete and accurate data, and foster the standardization of data collection.
The responsible health care center staff member will maintain a problem logbook to document unanticipated problems. Technical questions encountered in the field will be resolved through consultation with the technical team and researchers responsible for the project.
To ensure that the record quality fulfills all prerequisites described in the literature and the normative documents previously mentioned, we will follow a set of recommendations described by the ERN in the RD-Connect framework, incorporating the indicators in each step of the process: collection, storage, preprocessing, processing, and reporting [63,64]. Our plan will consider aspects such as governance standards, infrastructure in compliance with the FAIR principles (findable, accessible, interoperable, and reusable for humans and computers), didactic material, and informative documents, as well as personnel training and a data quality trail. The process and tools used for each level are presented in Figure 1.
Data Management
Trained research nurses at the participating health facilities will use KoBo Toolbox data collection tools to collect data for both retrospective and prospective phases. All entries will be deidentified at the stage of data collection, and participants will be identifiable only by unique identification codes that are only accessible and known to the hospital coordinator. A customized data entry and monitoring system will be developed in the REDCap platform for this study. This data entry system will be password protected and accessible only to the database managers and study team. The system will be developed and coordinated by the study data management unit at the University of São Paulo, Brazil.
Portal Development and Data Analysis
After the identification and collection of the essential data, the IT team will be responsible for developing all the data analyses under the supervision of specialists in the rare disease network. The analysis will serve as support for RARASnet specialists and patients to understand the main aspects of human and technical resources and the flow of rare diseases in Brazil. As retrospective and prospective data will be collected, they will serve as a base for the exploration of statistical and modeling computational methods, and with the validation of the results among specialists, the database will be incorporated into DATASUS and communicated in scientific reports and a web portal. This web portal will be one of the main practical contributions of this work. It refers to the Brazilian Digital Atlas of Rare Diseases, available through a health observatory, which aims to integrate structural information about the referential institutions working in rare diseases in the country and clinical information about the individuals assisted by these institutions. This building process will be done according to the guidelines proposed by WHO for the development of health observatories [65].
From that data organization, the analysis tools will be made available, providing health indicators to the managers (hospital, municipal, and regional). The main analyses of the web portal will be (1) the flow of patients, which will present the displacement of patients according to the place of origin and the hospital care through georeferenced maps and tables; (2) hospital indicators, which will provide the automated calculation of 31 hospital indicators, such as mortality, morbidity, capacity, and usability, aiming to observe and compare these indicators among institutions; (3) nosological profiles, which will highlight the hospital care of individuals, allowing for the characterization of morbidities in the rare disease community; (4) diseases sensitive to primary care, which will describe hospitalizations for morbidities related to primary care, facilitating the identification of hospitalization rates that could be avoided by strengthening primary care; (5) prediction of risk of death by the Charlson Comorbidity Index, which will provide the risk of death for patients according to their comorbidities; and (6) medical procedures, which will describe the surgical procedures performed, allowing the comparison between these procedures and the resources used [66-69].
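Item (5) above relies on the Charlson Comorbidity Index. The sketch below is a minimal illustration of how such a score could be computed from comorbidity flags, using a subset of the classic 1987 Charlson weights; the full portal would need the complete weight table and a mapping from ICD-10 codes to comorbidity categories, which is not shown here.

# Subset of the classic 1987 Charlson weights; the project would need the full
# table and an ICD-10-to-category mapping, omitted here for brevity.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "chronic_pulmonary_disease": 1,
    "diabetes_with_complications": 2,
    "moderate_severe_renal_disease": 2,
    "metastatic_solid_tumor": 6,
}

def charlson_score(comorbidities: set) -> int:
    """Sum the weights of the comorbidities recorded for one patient."""
    return sum(CHARLSON_WEIGHTS.get(c, 0) for c in comorbidities)

# Hypothetical patient with heart failure and diabetes with end-organ damage.
print(charlson_score({"congestive_heart_failure", "diabetes_with_complications"}))  # 3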
The tools described will provide interactivity through consultation filters with spatial disaggregation (by region, health region, municipality, or a specific hospital) and temporal disaggregation. Thus, information will be able to be explored historically, geographically, and in real time, supporting different demands and decision making for rare diseases in Brazil. However, besides the web portal, which will contain general public information and resources, we will also provide all the knowledge through videos and talks using didactic language to facilitate understanding.
Results
Considering the objective list and proposed conceptual and technical model, RARASnet presents some outcomes of interest, both specific and collaborative, in 9 steps:
1. Survey epidemiology, clinical procedures, and therapeutic resources, such as the number of individuals with rare disease according to each diagnostic group, age, race, sex, and other features.
2. Create a national rare disease network with the participation of important university hospital health services in rare diseases to create a database of national rare diseases.
3. Cover all regions of Brazil concerning the main rare diseases, with institutes and number of cases stratified according to each type of condition.
4. Create a standard in sociodemographic, epidemiological, clinical, and therapeutics data with the advice of the specialists in the national rare disease network. The data should follow patterns proposed by the Ministry of Health guidelines and HPO terms.
5. Identify the type of treatments being applied in each center and those funded by SUS or by supplementary health. The goal is to have a quantitative analysis for each type of treatment to understand the overall status of rare disease in Brazil and inform public health policies.
6. Map existing diagnostic and technological resources within the network.
7. Map human resources, such as the quantity of workers and specialists available in the network in each region of Brazil.
8. Establish a network of partners to underpin collaborative studies concerning rare diseases.
9. Develop the online Brazilian Atlas of Rare Diseases according to the guidelines of the WHO [70] to help professionals, the general public, and political decisions.
The present project is in its initial stages, and a survey was completed by each reference center to evaluate the technical aspects of each health care center, such as the presence of computers, technical support staff, and a digitized system. Moreover, we are in the process of internally validating the collection instruments with specialists and principal investigators and preparing the pilot project to be carried out at the coordinating center for external validation. All the predicted methodological processes are shown in Figure 2.
For the participating centers that have already obtained the project approval from their respective ethics committees, we developed an initial data collection instrument to verify the technological infrastructure of each center and the way these institutions capture information related to rare diseases. This survey aims to list and categorize these institutions according to their methods of data storage and retrieval, which can be digital, through electronic medical records and management software, or analog, through paper-based record management.
From a total of 40 participating centers so far, 39 have already responded to this initial survey, and among these, the results showed that 23 institutions (59%) have an electronic collection and recovery system and 16 institutions (41%) have a paper collection system. This initial survey is important for our IT team to plan the best clinical data collection approach for each institution during the project, aiming to minimize obstacles through an adequate and personalized collection proposal for each center.
One of the biggest challenges of this study is aligning data collection in all participating institutions so that the process of recovering data from medical records is standardized regardless of the storage support and the methods of retrieving information used in each health unit. In our scenario, based on 39 institutions, 14 (36%) health units extract information from medical records exclusively on paper, while 2 (5.1%) have a nonapplicable data recovery method; although they store their data in a system or on paper records, their recovery process does not fit into one of these methods (eg, using applications that are not for this purpose).
Later, we will standardize and analyze the clinical and epidemiological data and use these data to develop the national network for monitoring rare diseases, using the Digital Health Observatory to make the information available.
The project had its financing approved in December 2019.
Retrospective data collection started in October 2020, and we expect to finish in January 2021. We will begin the prospective data collection in February 2021, and we expect to finish in June 2021. During the third quarter of 2020, we enrolled 40 health institutions from all regions of Brazil. We are currently receiving data to be analyzed. We expect the publication and dissemination of the findings in the second half of 2021.
Discussion
This study is currently in its initial stage. We have performed a survey of technical aspects of health care centers (eg, support staff and technological infrastructure). A pilot data collection of clinical data carried out by specialists and principal investigators is planned to underpin the instruments' validation.
Main Problems Anticipated and Proposed Solutions
The heterogeneity of the data is intrinsically connected to the type of information generated by the health services, which is considered diverse and complex. Some of the main problems normally encountered when handling health data are the highly heterogeneous and sometimes ambiguous nature of medical language and its constant evolution; the huge amount of data generated constantly by the automation of hospital processes; the emergence of new technologies; the need to process, analyze, and make decisions based on this information; and the need to ensure the safety of data related to patients [63].
To mitigate problems of heterogeneity and data standardization, we will make use of some semantic web technologies, which are presented as a fundamental approach to guarantee semantic interoperability and the integration of dispersed and isolated data sets. More specifically, we will use biomedical ontologies, which provide controlled vocabularies of scientific terminologies used to assist in the annotation of produced data, such as basic terms and their relations in a domain of interest, as well as rules to combine these terms and relations [64]. As mentioned, some of the ontologies used will be ORDO and HPO.
Once the ontologies are defined, it will be possible to perform the semantic markup on the collected records present in our relational database and provide a SPARQL Protocol and RDF Query Language access point to execute queries on the data set, allowing us to make available a set of data that can be extracted by different information systems, as long as they are connected to the web.
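As an illustration of what such an access point could offer, the sketch below queries a hypothetical SPARQL endpoint for case counts per Orpha code; the endpoint URL, graph vocabulary, and predicate names are assumptions made for this example and do not reflect the project's published schema.

from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint and vocabulary, used only to illustrate exposing the
# semantically annotated records through SPARQL.
endpoint = SPARQLWrapper("https://example.org/raras/sparql")
endpoint.setQuery("""
    PREFIX ex: <http://example.org/raras#>
    SELECT ?orphaCode (COUNT(?case) AS ?cases)
    WHERE {
        ?case ex:hasDiagnosis ?diag .
        ?diag ex:orphaCode ?orphaCode .
    }
    GROUP BY ?orphaCode
    ORDER BY DESC(?cases)
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["orphaCode"]["value"], row["cases"]["value"])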
For security reasons related to the sensitivity of the stored information, direct access by external systems to the data structure is blocked by default. Therefore, an authorization layer will be built to support the authentication processes (validation of the identity of external systems). The authorization and protection of the information transmitted will use digital signature and hybrid encryption techniques, that is, a combination of symmetric (single-key) and asymmetric (public and private key pair) encryption.
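A minimal sketch of this hybrid scheme, using the Python cryptography package, is shown below: the payload is encrypted with a symmetric key, the symmetric key is wrapped with the recipient's public key, and the ciphertext is signed with the sender's private key. Key management, certificates, and the blockchain layer are out of scope here, and the key sizes and padding choices are illustrative assumptions rather than the project's final parameters.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Illustrative key pairs; in practice each consortium member would hold its own.
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

payload = b'{"patient_code": "RI0103", "orpha_number": "ORPHA:586"}'

# 1. Symmetric encryption of the payload.
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(payload)

# 2. Wrap the symmetric key with the recipient's public key (asymmetric step).
wrapped_key = recipient_key.public_key().encrypt(
    sym_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# 3. Digital signature over the ciphertext with the sender's private key.
signature = sender_key.sign(
    ciphertext,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Receiving side: verify the signature, unwrap the key, and decrypt.
sender_key.public_key().verify(
    signature, ciphertext,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
recovered_key = recipient_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
print(Fernet(recovered_key).decrypt(ciphertext) == payload)  # True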
We believe that through these planned solutions, obtaining information from the set of data related to rare diseases in Brazil will become possible, allowing data to be shared, reused, analyzed, and applied in other information systems, either to improve the completeness of other bases or to produce relevant knowledge to support decision-making processes in the context of rare diseases.
Applicability of the Results
In the development of digital products and services for this project, all tools must ensure that users have the freedom to interactively navigate and filter data to visualize the analysis according to personal interests. For this, the Brazilian Digital Atlas of Rare Diseases will have a filter that allows spatial disaggregation (queries by regions, health regions, municipalities, or a particular hospital) and temporal data. The filters will allow the user to set different visualization schemes without accessing the raw data and modifying the database.
It will also be possible to perform other types of data aggregation in queries, such as grouping by gender, age group, ICD-10, Orphacode, Online Mendelian Inheritance in Man (OMIM) [63], phenotypic characteristics, and other information that the health professionals involved consider relevant. It is important to emphasize that in the RD context, ICD-10 and OMIM are not able to cover all diseases with a unique identifier [71,72]. Thus, when ICD-10 is used as a filter, a further option box will be opened to distinguish between diseases with the same code.
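A minimal sketch of such an aggregation is shown below, assuming a hypothetical tabular extract; the column names, codes, and age-group cut points are illustrative choices, not the portal's actual definitions.

import pandas as pd

# Hypothetical extract of the portal's underlying data; values are illustrative.
cases = pd.DataFrame({
    "orpha_number": ["ORPHA:586", "ORPHA:586", "ORPHA:98896", "ORPHA:558"],
    "sex": ["F", "M", "F", "M"],
    "age": [7, 34, 15, 42],
    "region": ["southeast", "south", "northeast", "southeast"],
})

# Age groups and a multi-way aggregation similar to the portal filters described above.
cases["age_group"] = pd.cut(cases["age"], bins=[0, 17, 59, 120],
                            labels=["0-17", "18-59", "60+"])
summary = (cases.groupby(["region", "orpha_number", "age_group"], observed=True)
                .size()
                .reset_index(name="cases"))
print(summary)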
For OMIM, only genetic disorders are covered, and a note will be displayed on the website [63]. Orphacode [62], on the other hand, is the nomenclature that covers all RDs with a unique code due to its polyhierarchical nature [71].
This approach makes it possible to measure the performance of both the institution providing the health service and the care team. The analysis of efficiency and performance will be presented through dashboards and reports in real time, which can be used for the elaboration of new models based on the results.
The database of patients with rare diseases will allow an interactive epidemiological map and detail the care journey of the main rare diseases in Brazil. In this sense, it is expected that these developments can assist the evidence-based decision-making process for rare disease services in Brazil, bringing benefits to patients, health professionals, and managers.
Plans for Validation, Dissemination, and Use of Project Results
The dissemination of the results will include the production of scientific papers in periodicals relevant to the area and the realization of scientific dissemination to the direct target audience and collaborators through workshops and training to the participating centers. Aiming for project integration and sustainability, we will make the ontologies developed available in the international repository of biomedical ontologies, BioPortal. These artifacts will therefore be able to be used in other projects around the world and updated constantly.
BioPortal is an open database that provides access to biomedical ontologies via web services, facilitating the participation of the scientific community in the evaluation and evolution of ontologies by suggesting additional resources for mapping terminologies and reviewing criteria and standards [73].
With the main results and interest topics, we intend to recruit a multidisciplinary panel for an e-Delphi [74] consensus-building exercise with the ad hoc team members. The e-Delphi method is an interactive structured communication technique to reach consensus on the responses, and it comprises an initial open round of questions to revise or suggest a list of potential items for scoring in the subsequent two scoring rounds.
Once results are validated, it is crucial "to design strategies and solutions to overcome bottlenecks that prevent proven and innovative public health interventions" from reaching the people who need them [75]. For this purpose, we intend to use the WHO toolkit for implementation research. One of the WHO toolkit topics describes how to plan a rigorous research project, including identifying implementation research outcomes, evaluating effectiveness, and making plans to scale up implementation in real-life settings [76].
Once we have the findings, we intend to analyze the implementation of these interventions and strategies. For this, the reach, effectiveness, adoption, implementation, and maintenance (RE-AIM) framework [77] will be used to organize reviews of the existing literature on health promotion and disease management in different settings. RE-AIM is a tool used to translate research into action for digital technologies by measuring 5 essential dimensions for successful implementation: reach, effectiveness, adoption, implementation, and maintenance.
The overall goal of the RE-AIM framework is to encourage program planners, evaluators, readers of journal articles, donors, and policy makers to pay more attention to essential program elements, including external validity, which can improve the sustainable adoption and implementation of effective, generalizable, evidence-based interventions [78]. Finally, by applying the RE-AIM framework, we can emphasize responses to improve the chances that recommendations will have a positive and sustainable impact on public health.
Figure 1. Framework for quality management of rare disease registries. IT: information technology.
Investigation of Experimental Wound Closure Techniques in Voice Microsurgery
Introduction
Microsurgery on human vocal folds typically involves the removal of benign lesions and often results in the creation of wounds in the form of epithelial micro-flaps (Benninger, Alessi et al. 1996). Conventionally, these micro-flaps are left to epithelialize without formal closure, which can result in healing by secondary intention and increased scar tissue formation (Woo 1995; Thekdi and Rosen 2002). Scar tissue in the lamina propria of the vocal fold affects its visco-elastic and vibrational properties (Bless and Welham 2010), disrupting the mucosal wave and often manifesting as hoarseness and a reduction in the phonatory capabilities of the patient (Thibeault, Gray et al. 2002). Since the precision of epithelial approximation accomplished during the surgical procedure and retained during the healing process affects the amount of scar tissue formation (Woo 1995), wound closure is of particular interest in voice microsurgery.
Extensive work has focused on improving wound closure methods to minimize scar tissue formation, ranging from micro-suturing, which allows for primary healing (Woo 1995; Tsuji, Nita et al. 2009), to the use of tissue adhesives like fibrin glue (Bleach, Milford et al. 1997; Flock 2005; Kitahara, Masuda et al. 2005; Finck, Harmegnies et al. 2010; Skodacek, Arnold et al. 2011), and the use of chemical agents (Campagnolo, Tsuji et al. 2010) like Mitomycin-C (Branski, Verdolini et al. 2006; Fonseca, Malafaia et al. 2010) or stem cells (Hong, Lee et al.) to enhance the healing process of vocal fold wounds. However, various challenges faced in the execution of voice microsurgery add to the complexity of wound closure. These include limitations in instrument movement imposed by the laryngoscope, reduced tactile feedback in surgical instruments, and loss of stereopsis. These are just some of the common challenges that can add to the intricacy of the closure of a simple wound, resulting in an increase in operation duration and associated risks under general anesthesia.
With this in mind, experimental evaluations of proposed microsurgical techniques are a necessary step in their development and optimization. Due to the rarity of human specimens for experimentation, different animal and synthetic models have been utilized instead. In this chapter, we discuss various vocal fold wound closure techniques as well as the models and methods used to evaluate them experimentally.
In the superficial layer of the lamina propria (SLLP), elastin and collagen fibres are loosely arranged within a matrix, whereas dense elastin fibres make up most of the intermediate layer. Collagen is densely packed in the deep layer, providing most of the support for the lamina propria (James B. Snow and Ballenger 2003). Hirano also proposed a cover-body concept, providing an explanation for the vibratory characteristics of the vocal fold. Based on his theory, the cover (consisting of stratified squamous epithelium and the underlying SLLP) is attached to the body (consisting of the vocalis and thyroarytenoid muscles) by an elastic interface or ligament (composed of the intermediate and deep layers of the lamina propria), with stiffness increasing from superficial to deep. This allows the cover to oscillate independently due to its elastic characteristics, resulting in the mucosal wave seen on stroboscopy and most of the vibratory dynamics required for good voice production and phonation (Hirano 1974).
Wound creation
Early treatments for benign vocal fold lesions consisted of stripping (de-epithelialization) of the entire vocal fold (Sataloff, Spiegel et al. 1995). The healing process after this method of treatment often resulted in significant vocal fold scar formation, which causes a change in the stiffness and viscoelastic layered structure of the lamina propria. This inhibits normal vibration of the vocal fold and can cause significant dysphonia and possible glottic incompetence. However, with the discovery by Hirano of the layered structure of the vocal fold and its implications for healing, treatment is now focused on preserving as much of the normal vocal fold structures as possible (Hochman and Zeitels 2000; Fleming, McGuff et al. 2001; Thekdi and Rosen 2002; Burns, Hillman et al. 2009). Avoiding injury to the deeper structures is important during voice microsurgery to minimize vocal fold scarring and persistent post-operative hoarseness.
Current methods in voice microsurgery are divided into two main categories based on the surgical instruments used: laser surgery or cold surgery. In laser surgery, a CO2 laser is used to ablate tissue and to coagulate the target region (Yan, Olszewski et al. 2010). Together with a micro-manipulator for precise cutting, the reduced blood loss during laser surgery enables a relatively clear view of the surgical field. Although studies have found no significant difference in surgical outcomes between laser and cold surgery (Zeitels 1996; Hormann, Baker-Schreyer et al. 1999; Benninger 2000), the risk of thermal damage to surrounding tissues is still dependent on familiarity with the equipment and surgical technique. This, coupled with the increased cost of equipment, maintenance, additional personnel, and their training (Yan, Olszewski et al. 2010), has driven the continued use of traditional "cold" voice microsurgery techniques.
Access to the vocal folds for microsurgery typically utilizes suspension laryngoscopy (Zeitels, Burns et al. 2004), where a rigid laryngoscope inserted via the patient's oral cavity provides a direct view of the vocal folds. The laryngoscope is suspended over the patient's chest, freeing the surgeon's hands for operating. A binocular operating microscope is used to provide magnification. Due to the prohibitive space constraints of laryngoscopes, microlaryngeal instruments are thin and long to access the lesion while maximizing the view of the surgical field. A significant level of dexterity is needed to handle the microlaryngeal tools, especially considering the fragile structure of the vocal fold. However, cold surgery allows for tactile feedback and is better utilized in techniques like the micro-flap excision of benign vocal fold lesions (Zeitels 1996), which we will focus on for the course of this chapter.
Microflap technique
The microflap technique has been accepted as the standard approach for cold surgical removal of benign vocal fold lesions (Ford 1999; Hochman and Zeitels 2000; Lee and Chiang 2009), achieving the main principles of vocal fold surgery: minimal tissue excision and minimal trauma to the SLLP and epithelium. This technique typically involves the initial creation of an epithelial incision beside the lesion. Blunt dissection is used to elevate the microflap while taking care to minimize trauma to the deeper layers of the lamina propria. Only pathologic tissue is excised, and the microflap is then reapproximated (Sataloff, Spiegel et al. 1995) as seen in Figure 1.
Wound closure
Following excision of the lesion, the microflap is redraped to promote primary healing (Hochman and Zeitels 2000). If there is loss of epithelium or dislodgement of the microflap, then healing can occur by secondary intention. In this case granulation tissue formation and epithelial migration occur (Woo 1995), and there is correspondingly more scar tissue formation. Voice rest is usually prescribed after surgery (Ishikawa and Thibeault 2010), but even with a totally compliant patient, apposition of the epithelial flap edges can be difficult to maintain. Thus various methods like micro-sutures and fibrin glue (Bleach, Milford et al. 1997; Flock 2005; Kitahara, Masuda et al. 2005; Finck, Harmegnies et al. 2010; Skodacek, Arnold et al. 2011) have been used to improve wound closure and minimize scar tissue formation.
Microsutures
The use of microsutures in vocal fold wound closure was proposed by Woo et al in 1995, hypothesizing that microsutures would allow precise positioning of wound edges and maintenance of the approximation (Woo 1995). This would reduce exposure of the wound site and permit primary healing to occur. They carried out the procedure in 18 patients, finding improved voice results after surgery. As there was no control group or basis for comparison in Woo et al's study, Fleming et al attempted to compare the amount of scar formation with and without microsutures in a canine model (Fleming, McGuff et al. 2001). A small sample group of 4 dogs was used, with bilateral microflap defects created in each dog. 6-0 fast-absorbing gut sutures were used to close the microflap on only one side, leaving the contralateral side unclosed. The amount of scar was evaluated between 39 and 49 days post surgery. Un-sutured vocal folds were found to have around 75% larger scar formation than sutured vocal folds, concurring with Woo et al's hypothesis that the use of microsutures improves postoperative wound healing.
Fleming et al also identified the length of time required for suture placement as the main disadvantage of this technique, suggesting that practice and familiarization with the technique using larger sutures before actual surgery could help mitigate the learning curve.
Tsuji et al recently proposed an improvement to the microsuture technique (Tsuji, Nita et al. 2009) by pre-tying a small length of 4-0 non-absorbable nylon suture to the free end of a 7-0 absorbable suture. The nylon acted as an anchor at the epithelial surface, preventing the thread from escaping and removing the need for an assistant surgeon to maintain tension on the free end of the suture. This improved the ease of performing the technique. Their new technique was tested on human cadaveric larynges for a total of 10 sutures, and they reported a placement time of 5 to 7 minutes per suture.
Tissue adhesives
Despite good wound healing results demonstrated by micro-sutures, many surgeons prefer using adhesives to hold down epithelial flaps to achieve wound closure. Tissue adhesives such as cyanoacrylates and fibrin glue have been used (Flock 2005) and may be easier to apply than sutures. Potential limitations of tissue adhesives include increased scar tissue formation if glue accumulates between the epithelial edges, preventing proper approximation, or if it adheres the epithelium to the underlying connective tissue without proper reformation of the intervening layered structure. Rapid curing can also prevent the surgeon from re-apposing malpositioned flaps. Lack of tensile strength of the adhesive is another concern. Fibrin glue can take several minutes to begin curing and several hours to develop its full strength. Especially during the curing phase, it may not possess sufficient tensile strength to prevent rupture of its bond (Woo 1995). As the vocal folds vibrate at high frequencies during speech, constant shearing against the adhesive causes wear, and the resultant debris may impede the vibratory properties of the vocal fold or result in secondary intention healing and a broader scar.
Selection of animal models
Experimentation on live humans is not possible or ethical in most situations. Cadaveric human larynges are also difficult or expensive to obtain. Hence, when studying a new technique or device, an animal model can provide a systematic platform for experimentation and validation. However, due to differences in vocal fold size and structure, one animal may not suit all research requirements.
Depending on the research question to be addressed and the methodological approach, these differences can limit the applicability of experimental results. Selection of an appropriate animal model therefore needs careful consideration. Practical issues like size, availability of the animal, availability of facilities to house it or carry out the procedures, procurement cost, and maintenance of the animal for the duration of the study can restrict researchers from acquiring their ideal animal model.
Characteristics of particular interest when considering operative techniques include the size, shape, and position of the larynx and other upper airway structures, to simulate surgical access. Similarity of vocal fold shape and location is essential for testing microsurgical techniques, while similar tissue composition is necessary when assessing in-vivo behaviour of implanted materials and tissue responses.
Rabbit models
Due to their docile nature, relatively abundant numbers, and ease of housing and management, rabbits are popular animal models. Rabbits are often used in immunological studies and exhibit similar vocal fold histology to humans. However, access to rabbit vocal folds by standard suspension laryngoscopy is limited due to the smaller size of the rabbit larynx. Carneiro et al (Carneiro and Scapini 2009) used rabbits to study vocal fold grafts by exposing their vocal folds via a neck incision and laryngofissure. Branski et al (Branski, Rosen et al. 2005) studied the healing process of the rabbit vocal fold after injury, using a neonatal laryngoscope to access the vocal fold. Campagnolo et al (Campagnolo, Tsuji et al. 2010) studied the healing effects of injectable corticosteroids after vocal fold surgery using a custom-made laryngoscope for access.
Canine models
Canine models are used extensively in phonation studies. Comparing vocal fold structure across dog, monkey, pig, and human models using histology and laryngeal videostroboscopy, Garrett et al (Garrett, Coleman et al. 2000) found that unlike the human vocal fold, which has a higher elastin concentration in the deeper layers of the lamina propria, both pig and dog had a thin band of elastin concentrated just deep to the epithelial basement membrane zone. Just deep to this thin band, collagen and elastin were less concentrated than in humans. The mucosal wave on stroboscopy was most similar between humans and canines, and it was concluded that dog vocal folds were the most suitable for use in surgical studies due to their similarity in size, histology, and mucosal wave. However, Fleming et al (Fleming, McGuff et al. 2001) noted that slight differences in vocal fold structure, such as the thicker lamina propria and the lack of a well-defined vocal ligament, would have implications for its vibratory characteristics. Also, the higher cost and ethical considerations of using a companion animal for experimental studies are practical issues that need to be weighed. Nevertheless, Fleming et al argued that as canine vocal fold healing was found to be similar to that of humans, and similar human pathological conditions have been found to occur in canine models, they are still suitable for use in vocal fold microsurgery. Hahn et al (Hahn, Kobler et al. 2005; Hahn, Kobler et al. 2006; Hahn, Kobler et al. 2006) also compared collagen and elastin distribution in human, dog, pig, and ferret larynges. They found that canine lamina propria collagen levels were most similar to those of humans, but on quantitative histology, elastin and collagen distribution in the human lamina propria was best matched by the porcine vocal fold.
Porcine models
Pigs are also common models for vocal fold studies. Based on our experience with pig models, the dimensions of the larynx in a 30 to 40 kg pig are similar to those of the adult human (Garrett, Coleman et al. 2000; Jiang, Raviv et al. 2001). The vocal folds have a similar configuration, and the intrinsic muscles and the distribution of the recurrent laryngeal nerve are similar, as demonstrated by detailed dissections of cadaveric porcine laryngeal neuromuscular anatomy (Knight, McDonald et al. 2005). Other phonatory characteristics such as rotational mobility of the cricothyroid joint, and the relative size and innervation of the cricothyroid muscle, have also been studied and found to be similar to those of humans (Jiang, Raviv et al. 2001), although these features are not of direct relevance to endoscopic laryngeal microsurgery.
An important difference between the pig and human vocal folds is that the pig has an additional fold in the vertical plane, separated by a ventricle. The presence of a superior and an inferior fold could relate to the thyroarytenoid muscle having two separate bellies (Knight, McDonald et al. 2005). It has been suggested that the inferior fold is the true vocal fold and the superior fold is akin to the ventricular fold in humans. However, this remains a subject for debate as there is a further ventricle above the superior fold. It is suggested that vibration occurs at both folds as well as in the supraglottic structures during phonation (Kurita, Nagata et al. 1983; Alipour and Jaiswal 2008).
The pig larynx also differs in the structure of the arytenoid complex. The arytenoid cartilages have been described as fused across the posterior commissure, making laryngoscopic exposure more difficult (Garrett, Coleman et al. 2000). In addition, the arytenoids are positioned more superiorly, resulting in a steeper angle to the vocal folds.
However, from our experience with intubated animals, neither of these features created a significant hindrance to exposure or access to the vocal folds.
Regner used high-speed digital imaging to compare the vocal fold vibratory characteristics of ex-vivo bovine, canine, ovine, and porcine larynges with human vocal folds. By measuring amplitude, oscillation frequency, and phase difference of vocal fold vibration, it was concluded that canine and porcine larynges are the most appropriate models for vibratory or kinetic studies on phonation (Regner, Robitaille et al. 2010). Alipour also studied the vibratory characteristics of excised pig, cow, and sheep larynges, and concluded that the porcine larynx had the highest range of phonation frequencies, making it a good candidate for animal studies (Alipour and Jaiswal 2008).
In a similar study, Jiang et al. (Jiang, Raviv et al. 2001) concluded that pig models provided the most similar vocal fold stiffness and were a reasonable alternative for phonation studies. As pigs are common livestock, the high availability of pig larynges from local abattoirs poses less of an ethical concern regarding sacrificing animals for research purposes.
Using animal models
Extensive ex-vivo experiments have been carried out for phonation studies (Regner and Jiang; Jiang, Zhang et al. 2003; Skodacek, Arnold et al. 2011), for modeling the vibratory dynamics of the vocal folds. These experiments allow precise and independent control of various parameters affecting phonation, enabling systematic investigation and measurement of vocal fold vibrations.
A typical setup of such experimental systems consists of a mounting assembly, a pseudo-lung, humidifiers, thermometers, and flow and pressure meters. The mounting assembly, where the excised larynx is housed, consists of one lateral pronged micromanipulator sutured to the anterior tip of the thyroid cartilage and two other micromanipulators attached bilaterally to the arytenoid cartilages. This allows the elongation of the vocal folds to be controlled precisely. Airflow is generated by either an internal building source or a conventional compressor and is conditioned by heaters/humidifiers in order to prevent the larynges from drying out. The excised larynx is clamped directly to a tube from the pseudo-lung, and flow and pressure meters are used to measure subglottal airflow and pressure before entry into the larynx. This experimental system can be easily adapted for use in ex-vivo surgical experimentation and can provide a platform to assess the effects of surgical procedures on vocal fold vibration.
Mechanical models
Alternatively, a mechanical model was proposed by Choo et al (Choo, Lau et al. 2010) specifically for the simulation of experiments on the vocal fold. In their design, they proposed the use of agarose as a material substitute for human vocal folds, mapping the mechanical properties of different agarose concentrations to those of the vocal fold cover and ligament. By repeated casting of different concentrations of agarose into a mould, the phantom vocal folds were designed to mimic the layered structure of the vocal fold. In addition, vocal fold vibration was actuated externally with the use of vibrators, allowing the vibration frequency to be controlled. Glottal gap and airflow could also be customized.
Using stroboscopy, Choo et al observed vibratory dynamics in their mechanically driven model similar to those of the mucosal wave in human vocal folds. After simulating a microflap and then subjecting the vocal fold phantom to vibration, cracks were found propagating radially outwards. Both these features suggested that the setup had potential for surgical experimentation.
New vocal fold wound closure device -Bioabsorbable microclips
A large part of our work is focused on the development of bioabsorbable surgical microclips for vocal fold wound closure. By combining the ease and efficiency of fibrin glue with the precision of microsutures, such surgical microclips have the potential to reduce vocal fold scarring and procedure time, culminating in cost savings and reduced morbidity for patients.
Surgical clips have been used in various areas of the body but have not been described previously for use on the vocal folds. This may be due to the challenges facing the design of a surgical clip for application in this area, including the need for extremely small size, the ability to withstand high vibration frequencies and shearing stresses during phonation, and the need for bio-absorbability. A number of materials have been studied in the design of surgical clips for other areas. Stainless steel clips and materials such as titanium and tantalum have been used, for example, to ligate the cystic duct and artery in laparoscopic cholecystectomy (Charara, Dion et al. 1994). However, some limitations of these materials include significant foreign body reaction, poor holding power, and significant interference with roentgenologic studies like computerized tomography (CT) and magnetic resonance imaging (Klein, Jessup et al. 1994; Min Tan and Okada 1999; Pietak, Staiger et al. 2006; Rosalbino, De Negri et al. 2010). The introduction of ligating clips manufactured from novel polymers such as polydioxanone in laparoscopic cholecystectomy helped to address these limitations. These clips are completely absorbed by ester bond hydrolysis over a period of 180 days, and the by-products are excreted in urine. Moreover, these clips produce minimal tissue reactivity with good adhesion and are radiolucent (Klein, Jessup et al. 1994).
Earlier investigations showed that clips constructed from such polymers were unsuitable for our requirements, as they could not provide adequate structural strength at the minute size required. As such, we are investigating the potential of magnesium as the main bioabsorbable material from which to construct such microclips.
There are many reviews on the potential and viability of magnesium as a biomaterial (Pietak, Staiger et al. 2006; Witte, Hort et al. 2008; Zeng, Dietzel et al. 2008). Most of these studies focused on the use of magnesium in orthopaedic implants and bio-absorbable vascular stents, concentrating on improving its mechanical properties by alloying with various elements. Zhang et al. (Zhang and Yang 2008) reported significant improvement of both biocompatibility and mechanical properties with the use of Zn as an additional alloying element to Mg-Si. Gu et al. (Gu, Zheng et al. 2009) reported good biocompatibility of magnesium with various alloying elements, recommending Al and Y for stents and Al, Ca, Zn, Sn, Si and Mn for orthopaedic implants. Drynda et al. (Drynda, Hassel et al. 2010) developed and evaluated fluoride-coated Mg-Ca alloys for cardiovascular stents, reporting good biocompatibility and better degradation behaviour. However, as pure magnesium has been found to corrode too quickly in the low pH environment of physiological systems, much effort has also been placed into developing alloys or coatings to limit its degradation (Zeng, Dietzel et al. 2008). Rosalbino et al. (Rosalbino, De Negri et al. 2010) reported improved corrosion behaviour of Mg-Zn-Mn alloys for orthopaedic implants. Kannan et al. (Kannan and Raman 2008) studied the corrosion of AZ series (Al and Zn) magnesium alloys with the further addition of Ca, reporting significantly improved corrosion resistance with a reduction in mechanical properties (15% in ultimate tensile strength and 20% in elongation before fracture). Zhang et al. (Zhang, Zhang et al. 2009) reported the use of dual-layer hydroxyapatite coatings to considerably slow down the degradation of 99.9% pure magnesium substrates without heat treatment.
Based on the good biocompatibility and healing results demonstrated by these previous studies, we hypothesized that a bio-absorbable magnesium clip would be able to hold the wound site more securely and facilitate better healing compared with surgical glue adhesives. Furthermore, with a design specifically aimed at reducing the technical complexity of achieving apposition of epithelial flaps, a purpose-built prototype applicator could improve the ease of handling and speed of insertion, possibly translating to improved surgical outcomes. Due to the difficulty of simulating the vocal fold environment for both mechanical and bioabsorbability studies, in-vivo experiments were carried out to evaluate the feasibility of the clips in accordance with an approved protocol.
In-vivo evaluation of microclips
A 30-40 kg pig has upper airway dimensions that provide a reasonable approximation to those of an adult human. Using this in-vivo model we were able to approach the larynx using a standard adult operating laryngoscope (Promed 222 mm operating laryngoscope, Tuttlingen, Germany). To simulate endoscopic laryngeal microsurgery, the pig was positioned supine with the cervical spine slightly flexed. The laryngoscope was passed trans-orally following intubation with a size 5 endotracheal tube. As in most mammals, the epiglottis is intra-nasal and must therefore be drawn down into the oropharynx in order to access the vocal folds during laryngoscopy; if per-oral intubation is performed, this is usually accomplished during intubation. The laryngoscope was suspended on a custom-made frame that enabled adjustments to be made to the position of the scope's tip, so as to optimize visualization of the vocal folds. By combining this with a 400-mm focal-length binocular microscope, the setup as seen in Figure 3 was close to that expected during surgery in an adult human. A longitudinal incision was made on one or both vocal folds using a sickle knife. An epithelial flap was elevated using micro-forceps and a dissector. The flap was then replaced and secured with either micro-clips (3-6 clips on one side), microsuture, or fibrin glue. The animal was monitored daily until the end of the three-week study, after which it was sacrificed and its vocal folds excised for histological evaluation.
Feedback on the surgical procedure for the microclips was generally positive. Implantation time was found to be less than a minute per microclip due to the straightforward nature of the application technique. Due to the limited workspace within the laryngoscope, microsuturing was found to be more complex than applying the microclips, which greatly simplified approximation of the vocal fold wound edges. From preliminary examination of the vocal folds excised after sacrificing the pigs, no damage was found on the contralateral vocal folds, demonstrating the safety of the microclips. We are still awaiting histological results, but on visual inspection scar formation is comparable to that obtained with sutures.
Conclusion
We have given an overview of the current techniques used clinically for vocal fold wound closure and an update on the potential of some microsurgical techniques proposed in the current literature. Animal and artificial models have been discussed, highlighting the complexities of selecting appropriate experimental models and methods for the evaluation of vocal fold microsurgery. We shared our experience in experimental microsurgery with respect to wound closure, specifically addressing the vocal fold microclip, a new wound closure device. The methods for testing the integrity and bio-absorption properties of such devices in vivo, and the technical challenges of applying such devices accurately during microsurgery in the larynx, were also discussed.
Light Echoes From Supernova 2014J in M82
Type Ia SN 2014J exploded in the nearby starburst galaxy M82 = NGC 3034, and was discovered at Earth about seven days later on 2014 January 21, reaching V maximum light around 2014 February 5. SN 2014J is the closest SN Ia in at least four decades and probably many more. Recent HST/WFC3 imaging (2014 September 5 and 2015 February 2) of M82 around SN 2014J reveals a light echo at radii of about 0.6 arcsec from the SN (corresponding to about 12 pc at the distance of M82). Likely additional light echoes reside at smaller radii of about 0.4 arcsec. The major echo signal corresponds to echoing material about 330 pc in the foreground of SN 2014J, and tends to be bright where pre-existing nebular structure in M82 is also bright. The second, likely echo corresponds to foreground distances of 80 pc in front of the SN. Even one year after maximum light, there are indications of further echo structures appearing at smaller radii, and future observations may show how extinction in these affects echoes detected farther from the SN, which will affect the interpretation of details of the three-dimensional structure of this gas and dust. Given enough data we might even use these considerations to constrain the near-SN material's shadowing of distant echoing clouds, even without directly observing the foreground structure. In addition, echoes in the near future might also reveal circumstellar structure around SN 2014J's progenitor star through direct imaging observations and other techniques.
Introduction
On January 14, 2014, SN 2014J flared into view in M82 (Zheng et al. 2014), to be discovered on January 21/22 (Fossey et al. 2014), perhaps the closest SN Ia since SN 1885 in M31 [1], and the closest SN of any type observed since SN 1987A. SN 2014J is also special for its appearance in the highly active starburst galaxy M82 (0.9 kpc from its center), but being a SN Ia this supernova samples a region of space in M82 that is not pre-determined to include a star-formation region, as would be the case for a core-collapse SN.
Observations
These observations result from an HST/WFC3 program (#13626: Crotts, PI) to observe properties of the light echoes and progenitor environment around SN 2014J. They consisted of four series of short exposures, primarily in single-orbit visits, with the idea of detecting increasingly deep imaging structure as the SN fades. In the latest of these four epochs the SN was sufficiently faint to reveal the echo signals discussed here without being swamped by the SN itself. All four visits will be useful for further investigations to be discussed in later work. The primary observations used in this paper are a total of 576 s of exposure in the F438W filter, 560 s in the F555W filter, and 512 s in F814W. These data were taken on 2014 September 5.9 (= JD 2456906.4 = MJD 56905.9 = 234.2 days after the estimated appearance of SN 2014J on 2014 Jan 14.75 = MJD 56671.75, and 213 days after maximum in V). The point-spread function in each band is derived from an 8 s exposure in F814W and 40 s in F438W on MJD 56727.8 (day 56.0), and a 128 s exposure in F555W on MJD 56781.1 (day 109.6). As a point of reference, our photometry of SN 2014J on day 234.2 in F438W, F555W, and F814W (STMAG = 16.70, 16.62, and 16.83, respectively) transforms roughly to (B, V, I) values of 16.9, 16.7, and 16.8, with B more uncertain.
[1] SN 2014J in M82 = NGC 3034 is at a distance of 3.5 ± 0.3 Mpc, while SN 1885 in M31 was 0.8 Mpc away. SN 1937C in IC 4182 was 4.0 ± 0.5 Mpc away, SN 1986G in Cen A was 3.9 ± 1.0 Mpc distant, while SNe 1895B and 1972E in NGC 5253 were at 3.9 ± 0.7 Mpc.
Analysis
A light echo from material at a single distance in the SN foreground will appear as a ring or arc of light of constant radius of curvature. That ring or arc will appear centered on the SN, unless the sheet of reflecting material is tilted with respect to the sightline from the observer to the SN, in which case it will appear as a ring/arc off-center from the SN. Any echo, therefore, is a composite of rings or arcs, even in the case of a SN embedded in reflecting nebulosity, in which case these rings/arcs can extend to zero angular radius, in an extended fuzz of illumination. Because of these characteristics, echoes have a strong tendency to appear as arcs/rings centered on the SN, unless they are at small angular radii.
A small angular radius in this case is on the scale of ct at the distance of M82, where c is the speed of light and t is the time since the light pulse maximum (213 d for these observations). At the distance of M82, this corresponds to 0.023 arcsec diameter, 58% of the width of a WFC3/UVIS pixel, which is unresolved and inaccessible at faint surface brightnesses due to the bright point source of the SN.
The foreground distance z of echoing material is approximated by the expression z = r^2/(2ct) - ct/2, where r is the physical distance, transverse to the Earth-SN sightline, to the echo's position. This equation for a paraboloid is an accurate approximation to the ellipsoid with one focus at Earth and one focus at the SN, with a major axis longer than the Earth-SN distance by ct. One notable characteristic of echoes is that for z > 0, and for a sheet of material even roughly perpendicular to the Earth-SN sightline, the apparent transverse motion of the echo is almost always faster than lightspeed, hence a reliable signature of the presence of an echo.
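As a concrete illustration of this geometry, the short Python sketch below evaluates the paraboloid relation for the echo features discussed here; the 3.5 Mpc distance to M82 and the 213-day epoch are taken from the text, while the unit conversions are standard. It is only a back-of-the-envelope check, not part of the original analysis.
```python
import math

# Values taken from the text: distance to M82 and days after maximum light.
D_M82_PC = 3.5e6            # distance to M82 in parsecs (3.5 Mpc)
T_DAYS = 213.0              # epoch of the echo imaging, days after V maximum
ARCSEC_TO_RAD = math.pi / (180.0 * 3600.0)
PC_PER_LY = 1.0 / 3.2616    # one light-year expressed in parsecs

def ct_pc(t_days: float) -> float:
    """Light-travel distance c*t in parsecs for a time t given in days."""
    return (t_days / 365.25) * PC_PER_LY

def transverse_r_pc(theta_arcsec: float) -> float:
    """Transverse distance r (pc) subtended by an angle theta at the distance of M82."""
    return theta_arcsec * ARCSEC_TO_RAD * D_M82_PC

def foreground_z_pc(r_pc: float, t_days: float) -> float:
    """Paraboloid approximation z = r^2/(2ct) - ct/2 for the foreground distance."""
    ct = ct_pc(t_days)
    return r_pc ** 2 / (2.0 * ct) - ct / 2.0

# The ~0.6 arcsec ring and the possible ~0.3 arcsec feature mentioned in the text.
for theta in (0.6, 0.3):
    r = transverse_r_pc(theta)
    print(f"theta = {theta:.1f} arcsec -> r = {r:5.1f} pc, z = {foreground_z_pc(r, T_DAYS):6.0f} pc")
```
With these inputs the two features map to roughly 290 pc and 70 pc, consistent with the ~300 pc and ~80 pc foreground distances quoted in this paper.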
For an echo of high surface brightness, we might be able to detect it over the bright [...] in PA over 95° < PA < 240° and every 10 degrees in PA over -60° < PA < 60°. The position of the echo is taken as the centroid of the echo peak seen in these cross cuts. No such peak is seen over 60° < PA < 95° and 240° < PA < 300°. There is a hint of an echo at smaller radius r ≈ 0.3 arcsec, PA ≈ 290°. We refer to this signal as a possible echo, while the ring at r ≈ 11 pc is highly probable. Indeed, one can find echoes similar in appearance around other SNe, e.g., the echo from the 8 pc diameter contact discontinuity around SN 1987A (Bond et al. 1990; Crotts et al. 1991). Any other plausible explanation would require apparent motion of about 50 times the speed of light. The only other known mechanism for such superluminal apparent motion involves near-lightspeed jets pointing nearly towards Earth, which in this case would imply a near-lightspeed cone of light aligned within a few arcsec of the Earth-SN direction, an alignment with an a priori probability of approximately 10^-14. [...] appearing farther from the SN in its southern extremes, and the northern structure more perpendicular to the sightline). At the distance of SN 2014J from the galaxy's center, 0.9 kpc out on the major axis, most of the gas and dust there appears within 1 kpc of the axis (ignoring that M82 might be edge-on). The observation that SN 2014J is at least 300 pc behind major structure (probably coincident with prominent luminous nebulosity seen in [...]) The color of the echo itself is 0.75, about 0.4 mag bluer, hence with a wavelength dependence in scattering efficiency of Q_scat ≈ λ^-1.5. This is similar to the echo photometry from SN 1987A, in which the echoes show B - V = 1.1 to 1.2 (e.g., Suntzeff et al. 1988), whereas the maximum-light colors of SN 1987A were 1.6 (Hamuy et al. 1988; Menzies et al. 1987; Catchpole et al. 1988), also 0.4 mag redder than its echoes.
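The quoted wavelength dependence of the scattering efficiency follows from simple arithmetic on that ~0.4 mag color offset; a minimal sketch, assuming effective wavelengths of roughly 438 nm and 555 nm for the two bands (an assumption, since the exact exponent depends on the adopted wavelengths), is shown below.
```python
import math

# Assumed effective wavelengths for the two bands (approximate, for illustration).
LAMBDA_B_NM = 438.0
LAMBDA_V_NM = 555.0
DELTA_COLOR_MAG = 0.4   # echo is ~0.4 mag bluer in (B - V) than the SN at maximum

# A 0.4 mag color offset corresponds to a band-to-band scattering efficiency ratio
# Q(B)/Q(V) = 10**(DELTA_COLOR_MAG / 2.5).  If Q_scat scales as lambda**(-n),
# then (LAMBDA_V / LAMBDA_B)**n equals that ratio, so n follows from logarithms.
q_ratio = 10.0 ** (DELTA_COLOR_MAG / 2.5)
n = math.log(q_ratio) / math.log(LAMBDA_V_NM / LAMBDA_B_NM)
print(f"Q(B)/Q(V) = {q_ratio:.2f} -> Q_scat ~ lambda^(-{n:.2f})")
# Yields an exponent near 1.5, of the order quoted in the text.
```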
Discussion
The value of Q_scat and other issues depend on the assumption that the extinction and scattering along the direct sightline from SN to Earth are identical to those along the reflected path from SN to echoing material to Earth, a deviation of only 2°. However, the environment around the SN is complex, and the SN resides in a dark lane (in projection) while the echoing material does not. Perhaps with upcoming echo data, this assumption can be tested. The brightest nebulosity to the immediate east of the SN extends over radii of 5 pc to 15 pc from the SN, which the echo at z ≈ 300 pc will traverse by about March 2015, at which point it will enter another dark lane. Similarly, echoes at this z distance will enter other dark lanes over the next few years. There will be several opportunities to study these dark lanes before the close of this decade.
While SN 2014J is famously in the same field as the ultra-luminous X-ray source M82 ULX-1, it is still too far away for the echoes to span this distance in many human lifetimes.
Eventually these echoes may impinge on the environments of the M82 planetary nebula M82 121 and the X-ray point source CXO J095544.57+694028.1, which are still decades if not centuries away in echo travel.
Conclusions
At least one and perhaps two light echo signals are detected from SN 2014J, corresponding to material about 300 pc and perhaps 80 pc in the SN foreground, and the former is well correlated in spatial extent with structures seen in two-dimensional projection.
(Figure caption, fragment) [...] is foreshortened by a factor of four. The echo ring at about 11 pc radius is portrayed as the gray forms above and below the sightline (N is up, E out of the page). The density of the grayscale for these forms indicates the brightness of the echo, with features behind the paraboloid also suppressed in grayscale. Note the primary echo complex at a distance z of about 300 pc in front of the SN, which has the rough shape of two inclined surfaces, at large distance from the SN due north and south, and smaller distance east and west, consistent with the oval shape of the echo ring. The candidate echo at smaller radii maps to about z = 80 pc.
Process-property correlations in laser-induced graphene electrodes for electrochemical sensing
Laser-induced graphene (LIG) has emerged as a promising electrode material for electrochemical point-of-care diagnostics. LIG offers a large specific surface area and excellent electron transfer at low cost in a binder-free and rapid fabrication process that lends itself well to mass production outside of the cleanroom. Various LIG micromorphologies can be generated when altering the energy input parameters, and it was investigated here what impact this has on their electroanalytical characteristics and performance. Energy input is well controlled by the laser power, scribing speed, and laser pulse density. Once the threshold of required energy input is reached, a broad spectrum of conditions leads to LIG with micromorphologies ranging from delicate irregular brush structures obtained at fast, high energy input, to smoother and more wall-like, albeit still porous, materials. Only a fraction of these LIG structures provided the high conductance required for appropriate electroanalytical performance. Here, it was found that low, frequent energy input provided the best electroanalytical material, i.e., low levels of power and speed in combination with high spatial pulse density. For example, the sensitivity for the reduction of K3[Fe(CN)6] was increased almost 2-fold by changing fabrication parameters from 60% power and 100% speed to 1% power and 10% speed. These general findings can be translated to any LIG fabrication process independent of the device used. The simple fabrication process of LIG electrodes, their good electroanalytical performance as demonstrated here with a variety of (bio)analytically relevant molecules including ascorbic acid, dopamine, uric acid, p-nitrophenol, and paracetamol, and their possible application to biological samples make them ideal and inexpensive transducers for electrochemical (bio)sensors, with the potential to replace the screen-printed systems currently dominating on-site sensing. Supplementary Information: The online version contains supplementary material available at 10.1007/s00604-021-04792-3.
Introduction
Low-cost, distributed electrochemical sensors can be used to address challenges in many fields of modern life, such as environmental monitoring, industrial manufacturing, food production, and, most prominently, healthcare. For example, electrochemical biosensors for the measurement of blood glucose concentration have matured through decades of research and were successfully commercialized by several companies [1,2]. These small disposable electrode strip sensors, combined with a handheld glucose meter and supplied with a few microliters of blood from a pin-prick, now enable millions of diabetes patients to manage their daily lives. Motivated by this success story, researchers hope to enable users to measure many other chemical parameters related to health, mostly applied to diseases where the patients benefit from continuous or frequent monitoring. While the proposed sensors naturally feature different details in their detection mechanism to provide suitable sensitivity and specificity for a given application, many rely on the same concept of electrochemical detection with a disposable, modified electrode, which means electrode materials as well as the production process need to be reliable and inexpensive. At the moment, the most common commercial electrode type is the screen-printed carbon electrode [3-5]. However, in recent years, among the many nanomaterials proposed as better electrode materials, laser-induced graphene (LIG) was described, which has the potential to truly challenge and eventually substitute screen-printed electrodes (SPEs) [6,7].
LIG is a porous graphene-like material that can be created by simply pointing a sufficiently strong CO2 laser onto commercial polyimide foil (e.g., Kapton®) in an ambient environment. The exact mechanism of the conversion has not been fully described and is also not the topic of this publication, but the following can be assumed to take place. A strong focused laser beam with a pulse length on the order of microseconds locally heats the substrate to temperatures above 2500 °C [8], followed by rapid cooling. At least part of the irradiated polymer immediately melts/evaporates, causing the porous nano- and microstructure observable by scanning electron microscopy (SEM). Photothermal bond breaking [8] rearranges the structure into mostly sp² carbons while the carbon content increases from below 70 to above 90%, as non-carbon elements partly remain in the material, possibly incorporated as heteroatoms in the dominating hexagonal carbon rings, but mostly evaporate as gases. Carbonization is readily confirmed by a color change from orange to black. Examination of the created material via transmission electron microscopy (TEM), X-ray diffraction (XRD), and Raman spectroscopy further reveals high similarity to few-layer graphene, confirming the sp² carbon structure and giving rise to the name laser-induced graphene [6,9].
With a commercial computer-controlled laser cutter, one can create two-dimensional patterns, a process generally termed direct laser writing (DLW). The obtained LIG patterns can have a spatial resolution of roughly 25 to 150 μm, depending on the employed focusing lens, and can be used as electrical transducers for various applications. DLW on Kapton film was first reported by the Tour group at Rice University in 2014, which also coined the term LIG and emphasized its possible application for the production of micro-supercapacitors [6]. In subsequent publications, they reported areal capacitances of 4-16.5 mF/cm² for pristine and heteroatom-doped LIG, which could be raised to 934 mF/cm² by deposition of MnO2 for additional pseudocapacitance [10]. Since then, the material has been further characterized and variations of the manufacturing process have been explored by choosing different substrates, among them wood, paper, and even coconut shells, or by changing the composition of the lasing atmosphere [11,12]. Much of this has been covered in two recent reviews published by the Tour group [9,13].
Since LIG is a graphene-like pure carbon material that can be made inexpensively in any chosen two-dimensional shape, it seems an obvious candidate for disposable electrodes used in electrochemical sensing, an area currently dominated by screen-printed carbon electrodes (SPCEs). The largest existing market for disposable biosensors is blood glucose monitoring for diabetes patients, and commercial electrodes used in blood glucose sensing are made by screen-printing, an inexpensive method suitable for mass production. In this method, a suspension of conductive material is transferred onto a support through a fine mesh, except in those places covered by a stencil, defining the desired shape of electrodes, leads, and contacts. Afterwards, the pattern is baked to solidify the suspension. With LIG electrodes, on the other hand, no material other than the polyimide foil is needed and no baking step is necessary; the pattern is created without the need for any additional binder substances and can thus easily withstand most organic solvents. Furthermore, less material is wasted in this process and a change in pattern design is easily realized, as it only requires a new drawing on the computer. In contrast, stencils must be prepared for each design change in the SPCE printing process. These differences suggest that LIG electrodes have a high potential to become a preferable electrochemical transducer for point-of-care applications, as their production ought to be simpler, less expensive, and easily adaptable to a roll-to-roll fabrication process. We and others have employed LIG for electrochemical sensors; e.g., Nayak et al. have shown that the oxidation peaks of the biological analytes dopamine, ascorbic acid, and uric acid could be well resolved in pulsed voltammetry on LIG [14]. Fenzl et al. used pyrenebutyric acid to couple a thrombin aptamer to the LIG matrix and demonstrated that it can be successfully used in an aptasensor [15]. Also, recent work from our lab describes a combination of amperometry-, potentiometry-, and impedance-based sensors made from LIG for the measurement of lactate, potassium, and conductivity in sweat [16]. Other groups have modified DLW-created carbon electrodes with copper nanoparticles alone or in combination with diamine oxidase to detect glucose or biogenic amines, respectively [17,18]. The group of Claussen has used LIG to make ion-selective electrodes for the detection of ammonium and nitrate in soil [19] or ammonium and potassium in human urine [20]. Cardoso et al. and Beduk et al. reported the successful use of molecularly imprinted polymers (MIPs) on LIG electrodes to detect chloramphenicol and bisphenol A, respectively [21,22]. More examples of the use of this material in electrochemical biosensing are listed in recent review papers by Kurra et al. [7] and Lahcen et al. [23].
Through all of these studies, it is generally recognized that the laser power, the laser beam size at the substrate surface, the speed at which the laser beam spot progresses during writing (i.e., the scan speed, v), and the spatial laser pulse density (i.e., how many times the laser fires per distance traveled over the substrate) all influence the procedural outcome. That is, they give rise to different structures in the nano- and micrometer regime, as well as differences in the Raman spectrum, electrical conductivity, and hydrophilicity of the interface. We postulated, therefore, that these parameters would have a significant impact on LIG's usability as an electroanalytical transducer, since electroanalytical performance is mainly determined by conductance, electron transfer, and electrode surface area. Here, we report on a detailed study investigating the performance of LIG electrodes fabricated over a large parameter space of laser power, scan speed, and different pulse densities in an effort to provide a systematic evaluation and understanding of how LIG fabrication parameters influence performance in electrochemical sensing. The reported observations were made with a specific commercial flatbed laser engraving system but can be translated to other machines that operate in a similar manner. The findings will also support future work on binder-free carbon electrode materials and help elucidate the relative importance of LIG characteristics for actual electroanalytical performance in real-world settings.
Electrode fabrication
Electrode patterns were designed on a computer with the vector graphics software CorelDraw. A model VLS2.30 laser cutter (Universal Laser Systems, Scottsdale, AZ, USA) equipped with a 30-W CO2 laser (10.6 μm) and a focused beam diameter of approx. 125 μm (2″ lens) was used to create LIG electrodes on Kapton foil. The foil was cut into sheets of suitable size and the borders were fixed with tape directly onto the machine's engraving table, which was brought to the focal distance. The z-distance yielding the smallest laser spot mark on a test material was regarded as the accurate focal distance, with an absolute value of an estimated 5.1 ± 0.1 cm. A fabrication scheme and two examples of electrode designs are shown in Fig. 1.
The power setting (1 to 100% of 30 W), the movement speed of the lens carriage in x-direction (1% to 100% of 50 inches/s), and the spatial laser pulse density (with the fixed combinations of 500 by 500, 1000 by 1000, or 1000 by 2000 PPI (pulses per inch in x- and y-direction)) were varied using the machine software. Percent values are used here instead of physical units to report power and speed settings (e.g., "1% power" instead of "0.3 W") because % power and measured power scale linearly only approximately, so that a direct translation into physical units may in some cases be imprecise. For brevity, the following shorthand was adopted for scribing conditions, in which, e.g., "1/10/1000 × 2000" means "1% power, 10% speed, 1000 PPI in x-direction, and 2000 PPI in y-direction." The distinction between x- and y-directions, indicated in Fig. 1a, is meaningful because the lens carrier travels quickly only in the former and more slowly in the latter direction, which is inevitable due to the way the positioning system is constructed: the laser moves at the set speed and pulse frequency along the x-axis, followed by a move in the y-direction, and then continues the scan at the set speed and frequency back along the x-axis. Nonetheless, the device delivers the desired PPI in each direction. Air was extracted from the scribing chamber continuously during operation to remove soot and any emerging gases.
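Because the percent settings map only approximately onto physical units, absolute numbers derived from them are rough; still, a simple dose estimate helps compare scribing conditions. The sketch below assumes the nominal 30 W maximum power, the nominal 50 in/s maximum carriage speed, and a strictly linear scaling of the percent values, none of which holds exactly for the instrument used here.
```python
# Rough areal energy dose for a given scribing condition, assuming nominal maxima
# of 30 W and 50 in/s and linear scaling of the percent settings (both are only
# approximations for the instrument described in the text).

P_MAX_W = 30.0       # nominal maximum laser power
V_MAX_IN_S = 50.0    # nominal maximum carriage speed in inches per second
IN2_TO_CM2 = 2.54 ** 2

def areal_dose_j_per_cm2(power_pct: float, speed_pct: float, ppi_y: int) -> float:
    """Approximate energy per unit area: P / (v * line spacing), with the line
    spacing in y given by 1/ppi_y inches."""
    p_w = P_MAX_W * power_pct / 100.0
    v_in_per_s = V_MAX_IN_S * speed_pct / 100.0
    line_spacing_in = 1.0 / ppi_y
    return p_w / (v_in_per_s * line_spacing_in) / IN2_TO_CM2

for power, speed, ppi in [(1, 10, 1000), (25, 40, 1000), (60, 100, 1000)]:
    dose = areal_dose_j_per_cm2(power, speed, ppi)
    print(f"{power}/{speed}/1000x{ppi}: ~{dose:.0f} J/cm^2")
```
Note that this lumps the pulsed delivery into an average power; the spatial pulse density additionally sets how finely that energy is distributed along each scan line.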
Pure argon and nitrogen gas could be supplied directly to the processing chamber to modulate the atmosphere. The gases were released directly at the lasing spot through the air-assist cone (see Fig. S1 in Online Resource 1). However, in contrast to the experiments with controlled-atmosphere chambers done by Li [12] and others, the supplied gases were inevitably diluted with air that was continually sucked into the (non-airtight) scribing chamber by the extraction system. Therefore, the lasing atmosphere was enriched in Ar or N2, but measured data on the gas composition are not available.
After scribing, electrodes were rinsed with water, then isopropanol, and then dried under a nitrogen stream. Transparent nail polish was painted over the leads to restrict contact with the electrolyte solution to the electrode area only. Post-scribing steps can be viewed in the supplied video files (Online Resources 2-5). Electrodes were stored in non-airtight boxes at room temperature in the lab until used.
Electrochemical analysis
For all electrochemical measurements, the analyte was dissolved in 1× PBS (pH 7.4). A PalmSens4 potentiostat (PalmSens BV, Netherlands) was used. For screening measurements, in which the same electrolyte could be reused many times, the simpler electrode design version of LIG (Fig. 1b, top) was clamped as working electrode (WE) and dipped into approx. 10 mL of electrolyte together with a platinum wire as counter electrode (CE) and a pole Ag/AgCl reference electrode (RE) (BAS Inc., USA). On the other hand, the 3-electrode design (Fig. 1b, bottom) was more convenient for recording calibration plots and working with small volumes of electrolyte. Here, a 50-μL droplet was placed onto the LIG electrode (WE and CE) and a Ag/AgCl reference electrode was contacted from above. The small LIG electrode in the 3-electrode design was intended as the basis for a future reference electrode but was not used in these investigations. Figure S2 shows photos of both cell arrangements. Voltammetric peak potentials and heights were determined either fully or semi-automatically through the software PSTrace 5.8 with a linear baseline. For the fabrication parameter survey, cyclic voltammetry (CV) of 5 mM K3[Fe(CN)6] in PBS + 0.1 M KCl (pH 7.4) was carried out at a scan rate of 50 mV/s with 1 mV steps, and the peak-to-peak separation, calculated as ΔEp = Ep,ox - Ep,red, served as a measure of electrode quality. CV at scan rates between 25 and 200 mV s⁻¹ was then recorded for a selected electrode type to determine its electrochemically active surface area (ESA) and effective heterogeneous electron transfer rate (k0,eff), as described in the supplementary information. Square-wave voltammetry was used for quantitative analysis and run at 5 mV step, 50 mV amplitude, and a frequency of 10 Hz.
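For reference, the sketch below shows one common way to extract the two figures of merit used in this survey from recorded data: the peak-to-peak separation from a single CV cycle, and an ESA estimate from the slope of peak current versus the square root of scan rate via the Randles-Sevcik equation. The diffusion coefficient and concentration defaults are illustrative values for ferricyanide; the exact procedure used in this work is the one described in the supplementary information.
```python
import numpy as np

def peak_to_peak_separation(potential_v, current_a):
    """Delta E_p = E_p,ox - E_p,red from one CV cycle (potential in V, current in A).
    Assumes baseline-corrected data; anodic peak = current maximum, cathodic peak =
    current minimum."""
    potential_v = np.asarray(potential_v)
    current_a = np.asarray(current_a)
    return potential_v[np.argmax(current_a)] - potential_v[np.argmin(current_a)]

def esa_randles_sevcik(ip_a, scan_rate_v_s, n=1, d_cm2_s=7.6e-6, c_mol_cm3=5.0e-6):
    """Electrochemically active surface area (cm^2) from peak currents measured at
    several scan rates, using the Randles-Sevcik relation for a reversible couple:
        i_p = 2.69e5 * n^(3/2) * A * D^(1/2) * C * v^(1/2)
    Defaults are illustrative values for 5 mM ferricyanide (D ~ 7.6e-6 cm^2/s)."""
    slope = np.polyfit(np.sqrt(np.asarray(scan_rate_v_s)), np.asarray(ip_a), 1)[0]
    return slope / (2.69e5 * n ** 1.5 * np.sqrt(d_cm2_s) * c_mol_cm3)
```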
Sheet resistance
The electrical sheet resistance was measured with a four-point probe apparatus built in-house: four spring-loaded gold pins, arranged in a straight line and spaced equally by 1.5 mm, were used to contact the center of a 1 × 1 cm LIG sample. The probing current was 100 μA.
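For a collinear four-point probe on a thin, laterally extended film, the sheet resistance follows from the measured voltage and sourced current as R_s = (π / ln 2) · V / I; the minimal sketch below applies this textbook relation and neglects the finite-size correction factors that would strictly apply to a 1 × 1 cm sample probed at 1.5 mm spacing.
```python
import math

def sheet_resistance_ohm_sq(voltage_v: float, current_a: float) -> float:
    """Sheet resistance (ohm/sq) for a collinear, equally spaced four-point probe on
    a thin sheet: R_s = (pi / ln 2) * V / I.  Finite-sample correction factors are
    neglected here."""
    return (math.pi / math.log(2.0)) * voltage_v / current_a

# Example with the 100 uA probing current from the text and a hypothetical 0.5 mV
# reading, which would correspond to roughly 23 ohm/sq.
print(f"{sheet_resistance_ohm_sq(0.5e-3, 100e-6):.1f} ohm/sq")
```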
Physical characterization
Infrared reflectance spectra were recorded over the range of 4000 to 650 cm⁻¹ (8 cm⁻¹ resolution) on an Agilent Cary 630 equipped with a diamond single-bounce attenuated-total-reflectance attachment. 32 scans were averaged for sample and background each. The elemental composition of Kapton and LIG was measured via combustion analysis on a Vario Micro Cube. Raman spectra were collected from 50 to 3500 cm⁻¹ on a Thermo Fisher DXR Raman microscope with a 532-nm laser set to 8-mW power and a ×50 objective with an estimated focal spot diameter of 0.7 μm. 16 scans were averaged per spot.
X-ray photoelectron spectroscopy (XPS) studies were carried out in a Kratos Axis Supra DLD spectrometer equipped with a monochromatic Al Kα X-ray source (hν = 1486.6 eV) operating at 150 W and a vacuum of ~10⁻⁹ mbar. Survey and high-resolution spectra were collected with pass energies of 160 eV and 20 eV, respectively. Samples were mounted in floating mode in order to avoid differential charging. Charge neutralization was required for all samples. Binding energies were referenced to the sp²-hybridized (C=C) carbon for the C 1s peak from graphene, set at 284.4 eV.
SEM images were obtained with a Zeiss LEO 1530. Water contact angles were recorded with the sessile drop method on an OCA 25 system with corresponding software (DataPhysics Instruments GmbH, Filderstadt, Germany).
Results and discussion
Laser-induced graphene was studied as an alternative, carbonaceous material for electrochemical (bio)sensing applications. Its promising characteristics, described by us earlier [15,16], indicate that it may be not only an alternative but a superior transducer material for electrochemical point-of-care sensors. Not much is known about the effect of energy input during fabrication in correlation with electroanalytical performance; only anecdotal data are available describing the obtainable micromorphologies. By systematically studying process parameters that influence the energy input and correlating them with micromorphologies, Raman characteristics, and especially electroanalytical characteristics, this study sought to fill that knowledge gap.
Understanding of scribing parameters' effects on laser-induced graphene
We created LIG electrodes with a circular working area (d = 3 mm, A = 0.071 cm²) connected via a bridge to an electrical contact area for electrochemical testing (see Fig. 1c). To assess the influence of the available laser instrument parameters, the power and speed settings as well as the spatial pulse density were systematically varied. The electrodes were judged by mechanical integrity and visual appearance, represented by the heatmap overview for a medium pulse density setting (Fig. 2). Representative visual appearances are shown in the inset. Electrodes of homogenous texture that would withstand bending without delamination were created by a suitable combination of power and speed (area of green color in Fig. 2). On the contrary, too much energy input resulted in brittle electrodes that would peel off the substrate easily. This was the case when the power setting was somewhat too high or the speed somewhat too low (red and orange colors). At low power and high speed settings, the substrate was not carbonized, or only partially (brown colors). When the energy input was just at the lower limit necessary for carbonization, only part of the desired shape was carbonized and instead triangular shapes appeared (see the second electrode from the bottom in the inset of Fig. 2). This odd phenomenon stems from the carbonization starting at randomly located but energetically favorable nucleation points on the substrate. Those initially carbonized islands then promote carbonization in their immediate vicinity through increased absorbance, which causes the lines of converted LIG to become longer with each consecutive sweep and creates the observed triangular patterns with the tip facing upwards. The tips face downwards when the laser scanning direction is reversed (i.e., going from bottom to top in the y-direction).
While all green-marked laser settings generated visually similar electrodes, conductivity and electrochemical activity differed. Cyclic voltammograms (CVs) were recorded on a series of electrodes in whose fabrication all parameters but one were kept constant, to demonstrate the influence of a single parameter. The settings of 1000 × 1000 PPI and a constant power of 30% were chosen because, at this point, the speed could be varied over a wide range while still producing functional electrodes (see Fig. 2). The resulting CVs in Fig. 3a show that peak-to-peak separation (ΔEp, Fig. 3b) and sheet resistance (Fig. 3c) both increase with scribing speed. To remove the influence of lead resistance from the CV analysis, the leads were painted with conductive silver paste. Consequently, ΔEp values dropped overall and were influenced less by the scribing speed. Apparently, the high resistance of the LIG leads causes a significant potential drop (iR-drop) between the electrode working area and the connecting clamp, which distorts the shape of the voltammograms. Longer leads increase the resistance (resistance factor), while larger electrode surfaces, higher concentrations of redox species, or increased scan rates in potential sweep experiments increase the current (current factor); both contribute to a larger iR-drop. Strategies to reduce iR-drop therefore include the use of small electrodes and low redox species concentrations, if application of conductive paint is to be avoided. The lead dimensions of a designed electrode are usually dictated by practical reasons and therefore offer little room for adjustment. However, with optimized laser settings (see below), the sheet resistance of LIG could be reduced to as low as 10-20 Ω sq.⁻¹, which greatly reduced the iR-drop problem (the value of 10-20 Ω sq.⁻¹ was not a specific target, but it was the best achievable). Therefore, it was possible to make an all-LIG electrode without additional processing steps, like painting the leads, when the right scribing conditions are used.
Fig. 2 Heatmap of electrode outcome vs. power and speed settings at a pulse density of 1000 × 1000 (x by y); color code as indicated in the inset: green = ok, darker brown = partial scribing (PS), lighter brown = no effect (NE), orange = LIG peeled off from substrate (PO), red = laser burned through substrate (B).
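To put numbers on why lead resistance matters, the sketch below estimates the potential error for a rectangular LIG lead; the lead dimensions and peak current are illustrative assumptions, not measured values from this work.
```python
def lead_resistance_ohm(sheet_res_ohm_sq: float, length_mm: float, width_mm: float) -> float:
    """Resistance of a rectangular lead of uniform sheet resistance: R = R_s * L / W."""
    return sheet_res_ohm_sq * length_mm / width_mm

def ir_drop_mv(peak_current_ua: float, resistance_ohm: float) -> float:
    """Potential error (mV) between working area and contact clamp at a given current."""
    return peak_current_ua * 1e-6 * resistance_ohm * 1e3

# Hypothetical example: a 20 mm long, 1 mm wide lead carrying a 100 uA peak current.
for rs in (20, 200):   # ohm/sq, spanning a well-optimized and a poorly conducting LIG
    r = lead_resistance_ohm(rs, 20.0, 1.0)
    print(f"R_s = {rs:3d} ohm/sq -> lead R = {r:5.0f} ohm, iR drop = {ir_drop_mv(100.0, r):4.0f} mV")
```
On this rough estimate, the same lead geometry contributes tens of millivolts of distortion at the optimized sheet resistance but several hundred millivolts for poorly conducting LIG, consistent with the observation above that optimized scribing settings greatly reduce the iR-drop problem.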
Furthermore, the influence of laser pulse density was investigated. The laser cutter used in this study permitted only certain locked pulse density settings; hence, 500 × 500 PPI, 1000 × 1000 PPI, and 1000 × 2000 PPI were compared (x by y direction). The heatmap in Fig. 2 was collected at 1000 × 1000 PPI, which resulted in a particularly good set of electrodes over a broader range of settings. Some power/speed combinations at the higher pulse density of 1000 × 2000 PPI yielded even better conductivity and probably electron transfer. However, these electrodes were sometimes prone to delamination, and the design space for power and speed was narrower at the increased pulse density (see Fig. S3). Therefore, the medium setting of 1000 × 1000 PPI was used in this work to create LIG electrodes for sensing applications. Furthermore, in terms of throughput, scribing the same pattern at 2000 PPI in y-direction takes twice as long as at 1000 PPI. Power/speed heatmaps recorded at other pulse density settings can be found in Fig. S3, along with a comparison of CV performance among the best electrodes obtained at different pulse density settings in Fig. S4. Electrodes created with the lowest spatial pulse density option of 500 × 500 PPI ranked lowest in conductivity and were deemed less useful for electrochemical measurements.
It should be mentioned that the exact results of Fig. 2 (and the other maps) are likely not directly translatable to machines with a different maximum laser power or different size of focused beam. Even the size of the desired pattern or placing the substrate too close to the limit of the scribing area can influence the carbonization outcome. However, from experience with different flatbed laser systems, we have found that optimal power/speed conditions always follow more or less the green diagonal region in Fig. 2. Specifically, as a rule of thumb, it seems that low power, low speed, and high spatial pulse density will create LIG electrodes with high conductivity and generally good electron transfer behavior at the electrode-electrolyte interface.
Based on these findings, the setting of 1% power, 10% speed, and 1000 × 1000 PPI (1/10/1000 × 1000) was selected as a very conductive, electrochemically active, and mechanically sturdy LIG material. Unless different settings are specifically mentioned, these settings were used to produce electrodes for all following electrochemical tests, and scribing took about 1 min per electrode. It should be pointed out though that many of the settings within the green range of the heat map (Fig. 2) can be suitable for a user's electroanalytical needs and be created at higher throughput.
To demonstrate the variation in chemical sensing performance within the same pulse density setting, three electrode types made with increasing energy input were chosen from the green area in Fig. 2: 1/10, 25/40, and 60/100 (the first number denotes % power, the second % speed), and SWV responses were recorded for standards of [Fe(CN)6]3− at concentrations between 1 and 100 μM (Fig. 4). The sensitivity (see Fig. S16) decreases from type 1/10 over 25/40 to 60/100, which correlates with the rising sheet resistance of the different LIG types: (26 ± 0.6) Ω, (49 ± 1.5) Ω, and (57 ± 6) Ω for 1/10, 25/40, and 60/100, respectively. While all three electrode types appeared visually homogeneous, the microstructure of 1/10 appears most uniform and flat, while 25/40 shows fibrous LIG structures and gaps. This appearance is even more pronounced in LIG type 60/100.

Fig. 3 a CVs of [Fe(CN)6] in PBS with electrode leads bare or covered with silver paint; higher speed settings correlated with larger ΔEp (laser power = 30%, 1000 × 1000 PPI). b Peak-to-peak separation in CV. c Sheet resistance of LIG vs. scribing speed.

One concern about electrode manufacturing reproducibility regarded the location of the polyimide substrate in the machine when being scribed, since the lens carrier might be slower in some regions than in others due to the available acceleration distance. Electrodes were thus scribed in several relative locations on the engraving table, and peak-to-peak separations were measured. Also, 1 × 1 cm squares of LIG were made in the same locations to measure the sheet resistance. It was found that substrate location had no significant impact on performance in CV (Fig. 5a), measured as peak-to-peak separation. All electrodes showed around 100 mV peak-to-peak separation, but the sheet resistance values, which were low overall, dropped from 33 Ω at the top-left location to 23 Ω at the bottom-right position (Fig. 5b). We conclude that, while the effect of location on sheet resistance may be kept in mind, for most electrochemical experiments no special attention needs to be paid to where the substrate is placed during electrode manufacture.
It is known that the presence of doping atoms such as nitrogen can increase the overall conductance of graphene-like materials [24]. It was therefore investigated whether such doping could be achieved by simply changing the gas environment during the scribing process. To this end, the laser cutter was flooded with either nitrogen or argon through a port in the chamber during operation. While this simple approach admittedly did not create a perfect gas atmosphere, it can quickly be realized in any lab. The LIG surfaces prepared under argon and nitrogen became very hydrophobic, with water contact angles of 170° and 150°, respectively, while the sessile drop spread completely on LIG prepared under ambient atmosphere. The gas environment influenced the microstructure (Fig. S9) and also the Raman spectrum (Fig. S10) but, surprisingly, did not cause significantly different outcomes in cyclic voltammetry (Fig. S11). In terms of large-scale fabrication, this indicates that the ambient atmosphere is sufficient to produce high-quality electrodes. More in-depth investigations regarding the influence of the lasing atmosphere on the spectral characteristics and hydrophobicity of LIG were published by Li et al. and Mamleyev et al. [12,25].

Physicochemical characterization

Infrared reflectance spectra of the Kapton substrate and LIG are displayed in Fig. S5A. The features of polyimide in the fingerprint region between 600 and 1800 cm−1 completely disappeared, and the overall transmission dropped profoundly across the whole spectrum after carbonization. Both observations confirm the chemical transformation to carbonaceous LIG.
The carbon content increased from roughly 68% in Kapton to above 93% in LIG (Fig. S5B), indicative of carbonization, while the contents of hydrogen and nitrogen decreased from 3% and 7% to values below 1%, likely being released as gases during the scribing process. The oxygen content dropped to 5% after scribing, which points toward the presence of oxygen-containing groups in the carbon lattice of LIG; these were also found via XPS analysis (Fig. S7). The Raman spectrum of LIG features the characteristic D, G, and 2D peaks known from graphene-like materials (Fig. S6).
SEM micrographs of porous LIG of the type 1/10/1000 × 1000 are shown in Fig. 6. A pattern of horizontal trenches is visible at low magnification (Fig. 6b), which was created when the pulsed laser beam passed over the substrate in successive lines from top to bottom with a pitch of about 25 μm. Given this pitch and the beam diameter of approximately 125 μm, each location on the surface specified by the pattern is passed over by the beam approximately five times. As seen in the cross-sectional view in Fig. 6d, a significant part of the Kapton substrate, around 100 μm, remains intact after lasing and serves as a support for the more delicate coral-reef-like LIG structure, which has an average height of 27 ± 3 μm as determined by SEM.
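The degree of beam overlap implied by these numbers can be worked out directly from the pulse density and the beam diameter; the small Python sketch below reproduces that arithmetic. It is only a geometric estimate based on the pitch and spot size quoted above, not a description of the actual energy distribution on the substrate.

```python
def line_pitch_um(ppi):
    """Center-to-center distance between successive scribed lines (1 inch = 25,400 um)."""
    return 25400.0 / ppi

def passes_per_spot(beam_diameter_um, pitch_um):
    """Rough number of overlapping passes that cover any given spot."""
    return beam_diameter_um / pitch_um

pitch = line_pitch_um(1000)  # ~25 um line pitch at 1000 PPI
print(f"line pitch   ~ {pitch:.1f} um")
print(f"beam overlap ~ {passes_per_spot(125, pitch):.1f} passes per spot")
```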
SEM pictures of LIG obtained at other scribing conditions revealed significantly different morphologies. This observation has already been disclosed by others; e.g., Tiliakos et al. identified five different morphic groups of LIG with differences in Raman spectra, electrical conductivity, and wettability [8]. They used a galvanometric laser processing unit which permitted a high degree of variation in scan rate and pulse frequency. On the other hand, Duy et al. reported the creation of long LIG fibers (LIGF) by decreasing the pulse frequency while at the same time using a smaller beam diameter, effectively reducing the beam overlap [26]. The commercial flatbed laser processing unit used by their group is similar to the one in this study, and we obtained LIGF by reducing the pulse density to 500 PPI while still using a beam diameter of 125 μm (Fig. S12). Apparently, the fibrous structure is also created when a certain beam overlap occurs, in this case approx. 4 times. In fact, even at 1000 × 1000 PPI, brush-like LIG structures could be observed, given the right power/speed combination, as seen in Fig. 4f above. Correlating the morphology to the electroanalytical performance in Fig. 4e, it can be deduced that large brush-like structures are not favorable for electroanalysis, likely because looser structures lead to a higher resistance and hence worse electroanalytical performance. A possible gain in overall surface area therefore does not translate into better electrodes for analysis.
The electrochemically active surface area (ESA) of LIG electrodes, determined with the voltammetric method via the Randles-Sevcik equation, was 0.107 cm2 (about 1.8 times the geometrical surface area AGEO, Fig. 7b), and the calculated effective heterogeneous electron transfer coefficient (k0,eff) for the [Fe(CN)6]3−/4− couple was 0.003 cm s−1.
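For readers who wish to reproduce the voltammetric estimate, the sketch below back-calculates the electroactive area from the slope of anodic peak current versus the square root of the scan rate, using the Randles-Sevcik equation for a reversible, diffusion-controlled couple at 25 °C. The scan rates, peak currents, and diffusion coefficient are hypothetical example values, not the data underlying Fig. 7b.

```python
import numpy as np

def esa_from_randles_sevcik(scan_rates_V_s, peak_currents_A, n, D_cm2_s, C_mol_cm3):
    """Estimate the electrochemically active surface area (cm^2) from the slope
    of anodic peak current vs. sqrt(scan rate), using the Randles-Sevcik
    equation for a reversible couple at 25 degC:
        i_p = 2.69e5 * n**1.5 * A * sqrt(D) * C * sqrt(v)
    """
    slope = np.polyfit(np.sqrt(scan_rates_V_s), peak_currents_A, 1)[0]
    return slope / (2.69e5 * n**1.5 * np.sqrt(D_cm2_s) * C_mol_cm3)

# Hypothetical example: 1 mM ferricyanide (D ~ 7.6e-6 cm^2/s), one-electron transfer
v = np.array([0.01, 0.025, 0.05, 0.1, 0.2])          # scan rates in V/s
ip = np.array([8.0, 12.6, 17.8, 25.2, 35.6]) * 1e-6  # made-up peak currents in A
print(f"ESA ~ {esa_from_randles_sevcik(v, ip, 1, 7.6e-6, 1e-6):.3f} cm^2")
```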
For comparison, commercial screen-printed carbon electrodes exhibited a lower calculated ESA of 0.9 times AGEO and a comparable k0,eff of 0.002 cm s−1. However, the Randles-Sevcik relationship, like the Nicholson method for the determination of k0,eff, is only strictly applicable to smooth electrode surfaces with planar semi-infinite diffusion of dissolved redox species. Therefore, the values of ESA and k0,eff reported here should be regarded as estimates rather than accurate values. The specific surface area of LIG 1/10/1000 × 1000 determined by nitrogen adsorption isotherm analysis was approx. 330 times the geometrical surface area. Obviously, this does not correlate with the determined ESA. We assume that the solution used in the electrochemical experiments may not enter all of the pores and cavities due to hydrophobic pockets, and that some areas are not sufficiently well connected electrically, which leads to this dramatic difference between the two measurements. Table 1 compares the values of ESA, k0,eff, and sheet resistance to previously published data for LIG or very similar material (LSG). The values we report here lie within the spectrum of previously reported results.
Finally, electrochemical impedance spectroscopy revealed extremely low impedance values compared to commercial screen-printed carbon electrodes of the same geometrical size (Fig. S13). We believe that this characteristic in particular will make LIG an interesting material for EIS sensors, as also demonstrated previously by us and other groups [29,31,32].
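EIS data of this kind are commonly interpreted with an equivalent-circuit model; the sketch below evaluates a simplified Randles circuit for two hypothetical parameter sets to illustrate how a large double-layer capacitance pulls the low-frequency impedance down. Neither the circuit choice nor the parameter values are taken from this study; they are placeholders for illustration only.

```python
import numpy as np

def randles_impedance(freq_hz, r_s, r_ct, c_dl):
    """Complex impedance of a simplified Randles circuit: solution resistance
    R_s in series with charge-transfer resistance R_ct in parallel with the
    double-layer capacitance C_dl (Warburg element omitted for brevity)."""
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    z_c = 1.0 / (1j * omega * c_dl)
    return r_s + (r_ct * z_c) / (r_ct + z_c)

# Hypothetical parameter sets: a porous, high-capacitance electrode vs.
# a smoother, lower-capacitance one of the same geometric size.
for label, params in {"porous (LIG-like)": (50, 300, 1e-4),
                      "smooth (SPE-like)": (50, 3000, 1e-6)}.items():
    z_1hz = randles_impedance(1.0, *params)
    print(f"{label}: |Z|(1 Hz) ~ {abs(z_1hz):.0f} ohm")
```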
Voltammetric applications of LIG electrodes
The electrochemical behavior of various molecules of interest on LIG electrodes was investigated by CV to test the broad applicability of this electrode type for chemical sensing (Fig. S15). We observed that [Ru(NH3)6]3+, a representative of molecules that undergo outer-sphere electron transfer, was just as easily detected as [Fe(CN)6]3−, which is classified as a more surface-dependent redox species (Fig. S15A) [33]. The detection of dopamine (DA) can generally be hindered by the presence of ascorbic acid (AA) or uric acid (UA), which oxidize at very similar potentials. In Fig. S15B, the peaks of DA, AA, and UA are sufficiently separated on LIG, as also shown by our collaborators earlier [14]. This beneficial effect might be partly explained by the transition from a planar semi-infinite diffusion regime to thin-layer diffusion, as Compton's group has pointed out in the past for glassy carbon electrodes modified with nanomaterials [34]. Figure S15C and D demonstrate the detection of p-nitrophenol and paracetamol, which could be direct analytes of interest, while a CV of the common redox mediator methylene blue (MB) is shown in Fig. S15E. MB adsorbs easily onto LIG, as indicated by the very low peak separation and increased currents in consecutive scans; in fact, Rathinam et al. have already suggested LIG powder as an adsorbent for MB in water treatment [35]. Since MB is also used as an electron mediator in biosensors, the strong adsorption may be beneficial in that case. Generally, an overall high background capacitance due to the large inherent surface area can be observed on LIG electrodes, paired with an excellent ability to support oxidation and reduction reactions of inner- and outer-sphere electroactive species. We observed that the current response of different redox species was significantly larger on LIG compared to commercial screen-printed carbon or glassy carbon electrodes. This may be caused not only by the porous nature of LIG but also by the apparent presence of many reactive edge sites (see the large D peak in the Raman spectrum of Fig. S6), which may account for the improved electrocatalytic activity [36]. This demonstrates the overall utility of LIG as a sensitive electrode material for analytical applications.
Finally, we compared the detection of [Fe(CN)6]3− on the chosen LIG electrode type using chronoamperometry (CA), cyclic voltammetry (CV), and square-wave voltammetry (SWV). Detection via CV was not possible below a concentration of 50 μM (Fig. 8c, d); the lowest detectable concentrations with CA and SWV were 25 μM and 5 μM, respectively. Above these values, the response was linear up to the highest tested concentration of 500 μM with CA and CV, while SWV allowed a linear calibration up to 100 μM, with signals increasing less at higher concentrations (only the linear part of the calibration is displayed in Fig. 8f).
The SWV calibration of K3[Fe(CN)6] on LIG electrodes in Fig. 8e, f can be compared to data from commercially available screen-printed carbon electrodes (DropSens, DRP-110) in Fig. S14. Our LIG electrodes exhibit a similar limit of detection (for the screen-printed electrodes, LOD = 1.0 μM and LOQ = 3.0 μM), although the screen-printed electrodes show less background current.
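As an illustration of how such calibration figures of merit can be derived, the sketch below fits a straight line to SWV peak currents and estimates LOD and LOQ from the residual standard deviation of the fit (3.3·s/slope and 10·s/slope, a common convention). The concentrations and currents are placeholder values, not the data behind Fig. 8 or Fig. S14.

```python
import numpy as np

def linear_calibration(conc_uM, peak_current_uA):
    """Least-squares line i = slope*c + intercept, plus LOD/LOQ estimated
    from the standard deviation of the residuals (3.3*s/slope, 10*s/slope)."""
    slope, intercept = np.polyfit(conc_uM, peak_current_uA, 1)
    residuals = peak_current_uA - (slope * conc_uM + intercept)
    s = np.std(residuals, ddof=2)  # ddof=2: two fitted parameters
    return slope, intercept, 3.3 * s / slope, 10 * s / slope

# Placeholder SWV calibration data for [Fe(CN)6]3- (uM vs. uA)
c = np.array([5, 10, 25, 50, 75, 100], dtype=float)
i = np.array([0.6, 1.1, 2.6, 5.2, 7.6, 10.1])
slope, intercept, lod, loq = linear_calibration(c, i)
print(f"sensitivity = {slope:.3f} uA/uM, LOD = {lod:.1f} uM, LOQ = {loq:.1f} uM")
```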
Conclusion
Laser-induced graphene (LIG) was investigated with a focus on understanding the effects that fabrication parameters and morphologies have on electroanalytical performance. It was found that the material properties can be tuned in the production process and, in fact, a large range of workable parameters was identified, which provides good leverage for large-scale production at high throughput while retaining sufficient electrode performance, tailored toward the desired application. The best analytical electrodes were obtained at low but frequent energy input, i.e., low laser power, low scanning speed, and high spatial pulse density. Key features of the material are its large porosity, sufficient mechanical stability, and chemical purity, as no binder substances are required such as in screen-printed graphite electrodes. Thorough electroanalytical characterization and application to a variety of analyte molecules show that LIG serves well for detection in the low micromolar range and provides reliable data. As is to be expected for a crystalline carbon material, some chemical species adsorbed readily and irreversibly onto LIG, which suggests its strength lies in single-use electrode systems. This works well with most healthcare applications, where the single-use approach dominates the diagnostic market. The nature of the laser patterning process poses two main constraints on LIG: the features cannot be much smaller than 100 μm, and the resulting material is always porous and never flat. Although a group recently reported LIG with a slightly higher resolution of 50 μm by switching from the IR laser to a UV laser [37], a much lower feature size on the scale of a few micrometers or even nanometers, as achieved by lithography, is generally impossible for LIG. It is therefore not suited to create nanoscale interdigitated arrays, e.g., for detection strategies using redox cycling [38]. However, the resolution is comparable to screen printing, and LIG is very well suited for general use [39].
A notable drawback of the binder-free LIG electrode material is its lower mechanical stability. For example, the most conductive LIG cannot be used as an electrode because it lifts off its substrate when slightly bent. It has been proposed to render LIG mechanically more robust by infusion of silicon [40,41] or cement [42], but electrode surface area and the superior electron transfer processes suffer from this. Instead, we propose that a balance between optimal conductivity and mechanical robustness needs to be found and tailored toward the final application.
Simulated Microgravity and Recovery-Induced Remodeling of the Left and Right Ventricle
Physiological adaptations to microgravity involve alterations in the cardiovascular system. These adaptations result in cardiac remodeling and orthostatic hypotension. However, the responses of the left ventricle (LV) and right ventricle (RV) to hindlimb unloading (HU) and hindlimb reloading (HR) are not clear, and the underlying mechanism remains to be understood. In this study, three groups of mice were subjected to HU by tail suspension for 28 days. Following this, two groups were allowed to recover for 7 or 14 days. The control group was treated equally, with the exception of tail suspension. Echocardiography was performed to detect structural and functional changes of the heart. Compared with the control, the HU group of mice showed reduced LV-EF (ejection fraction) and LV-FS (fractional shortening). However, mice that were allowed to recover for 7 days after HU (HR-7d) showed increased LVIDs (systolic LV internal diameter) and LV Vols (systolic LV volume). Mice that recovered for 14 days (HR-14d) returned to the normal state. In comparison, RV-EF and RV-FS had not recovered to normal even after 14 days of reloading. Compared with the control, RVIDd (diastolic RV internal diameter) and RV Vold (diastolic RV volume) were reduced in the HU group and recovered to normal in the HR-7d and HR-14d groups, in which RVIDs (systolic RV internal diameter) and RV Vols (systolic RV volume) were increased. Histological analysis and cardiac remodeling gene expression results indicated that HU induces left and right ventricular remodeling. Western blotting demonstrated that the phosphorylation of HDAC4 and ERK1/2 and the ratio of LC3-II/LC3-I were increased following HU and recovered following HR in both the LV and RV, whereas the phosphorylation of AMPK was inhibited in both the LV and RV following HU but was restored only in the LV following HR for 14 days. These results indicate that simulated microgravity leads to cardiac remodeling and that the remodeling changes can be reversed. Furthermore, in the early stages of recovery, cardiac remodeling may be intensified. Finally, compared with the LV, the changes in the RV are not as easily reversed. Cardiac remodeling pathways, such as HDAC4, ERK1/2, LC3-II, and AMPK, were involved in the process.
INTRODUCTION
There are various changes in the human cardiovascular system due to microgravity during space flight, including a cephalic fluid shift (Thornton et al., 1987), changes in cardiac systolic volume (Bungo et al., 1987;Caiani et al., 2006), and, over time, a loss of left ventricular mass (Perhonen et al., 2001;Summers et al., 2005). The adaptations and adjustments that characterize the responses to the metabolic demands of activity and gravitational loading on Earth change dramatically under conditions of microgravity. A chronic reduction in metabolic demand and oxygen uptake reduces the demand on cardiac output and tissue perfusion, resulting in cardiac atrophy and a decline in function, and further leading to orthostatic intolerance upon return to full gravity, with the potential risk of irreversible structural changes that may become pathological (Marcus et al., 1977;Zile et al., 1993;Perhonen et al., 2001). Because of this, it is essential to determine the severity of cardiac changes upon return to the ground. Many studies have demonstrated that cardiac remodeling induced by microgravity and/or simulated microgravity is associated with a decline in cardiac function. However, the changes in heart structure and function during reloading following simulated microgravity are not well understood.
An abundance of data has provided insight into the changes that occur in the left ventricle (LV; Summers et al., 2005;Westby et al., 2016). There are no data, however, on remodeling in the right ventricle (RV) under weightlessness due to reduced gravitational loading. The changes in the LV and RV that occur in astronauts during space flight and their subsequent return to the ground are poorly understood. On the cellular level, the remodeling responses of the LV and RV to pressure overload are largely similar. There are several major signaling molecules involved in cardiac remodeling induced by external or intrinsic stimuli, including HDAC4, AMP-activated protein kinase (AMPK), ERK1/2, and LC3-II. However, there is a divergence in the molecular mechanisms of the RV compared with the LV under stress conditions (Reddy and Bernstein, 2015). The difference in the responses of the LV and RV to simulated microgravity as well as the signaling molecules involved in this process need to be explored further.
From the perspective of the cardiovascular system, rodent hindlimb unloading (HU) is a suitable model. There is extensive literature investigating cardiovascular adaptation to simulated microgravity, predominantly using the HU rat or mouse model (Hasser and Moffitt, 2001). The mouse demonstrates a wide range of cardiovascular responses to HU-simulated microgravity, including alterations in heart function, heart rate, exercise capacity, peripheral arterial vasodilatory responsiveness, and the baroreflex response (Powers and Bernstein, 2004). Many of these responses are similar to those seen in humans. Following 28 days of HU-simulated microgravity, mice manifest many of the cardiovascular alterations that have been previously demonstrated in humans during space flight (Buckey et al., 1996;Fritsch-Yelle et al., 1996;Powers and Bernstein, 2004).
Here, we hypothesized that HU can lead to distinct remodeling of the LV and RV in mice, and that reloading after HU has a further effect on left and right ventricular remodeling. In this study, we examined the remodeling signals and structural changes of the LV and RV following HU and hindlimb reloading (HR). We determined that pathological remodeling signals are overactive in both the LV and RV following HU and/or HR, and are restored after 14 days of reloading. The physiological remodeling signal AMPK is downregulated in both the LV and RV, which leads to the functional decline of both ventricles. Finally, we found that recovery is more difficult in the RV than in the LV. This study provides insight into the molecular mechanisms of cardiac remodeling and the decline of systolic function of both the LV and RV during simulated microgravity and recovery.
Animals
All mice used in the experiments were bred and maintained at the SPF Animal Research Building of the China Astronaut Research and Training Center (12-h light, 12-h dark cycles, temperature controlled at 23 °C, and free access to food and water). The mice used in this study were 3-month-old males on a C57BL/6N background. The experimental procedures were approved by the Animal Care and Use Committee of the China Astronaut Research and Training Center, and all animal studies were performed according to approved guidelines for the use and care of live animals.
Hindlimb-Unloading Model
The hindlimb-unloading procedure was achieved by tail suspension, as described by Morey-Holton and Globus (2002). Briefly, the 3-month-old mice were individually caged and suspended by the tail using a strip of adhesive surgical tape attached to a chain hanging from a pulley. The mice were suspended at a 30° angle to the floor with only the forelimbs touching the floor, which allowed the mice to move and to access food and water freely. The mice were subjected to hindlimb unloading by tail suspension for a total of 28 days, which we identify as the "unloaded" state, after which they were returned to the normal four-extremity weight-bearing "reloaded" position (hindlimb reloading, HR). Similar numbers of control mice of the same strain background were instrumented and monitored in similar fashion under identical cage conditions but without tail suspension.
Histological Analysis
Sections were generated from paraffin-embedded hearts and were stained with H&E for gross morphology and with Masson's trichrome for detection of fibrosis, as described previously (Ling et al., 2012).
RNA Extraction and Real-Time Polymerase Chain Reaction
Total RNA was extracted from heart tissues by using RNAiso Plus reagent (Takara) according to the manufacturer's protocol.
Transthoracic Echocardiography
Animals were lightly anesthetized with 2,2,2-tribromoethanol (0.2 ml/10 g body weight of a 1.2% solution) and placed in a supine position. Two-dimensional (2D) guided M-mode echocardiography was performed using a high-resolution imaging system (Vevo 770, VisualSonics Inc., Toronto, ON, Canada). Two-dimensional images were recorded in parasternal long- and short-axis projections with guided M-mode recordings at the midventricular level in both views. Left ventricular (LV) cavity size and wall thickness were measured in at least three beats from each projection. Averaged LV wall thickness [interventricular septum (IVS) and posterior wall (PW) thickness] and internal dimensions at diastole and systole (LVIDd and LVIDs, respectively) were measured. LV fractional shortening [(LVIDd − LVIDs)/LVIDd], relative wall thickness [(IVS thickness + PW thickness)/LVIDd], and LV mass {LV mass = 1.053 × [(LVIDd + LVPWd + IVSd)³ − LVIDd³]} were calculated from the M-mode measurements. LV ejection fraction (EF) was calculated from the LV cross-sectional area (2D short-axis view) using the equation LV %EF = (LV Vold − LV Vols)/LV Vold × 100%. For the RV, two-dimensional images were recorded in right parasternal long- and short-axis projections with guided M-mode recordings at the maximum-diameter level in both views. Right ventricular (RV) cavity size and wall thickness were measured in at least three beats from each projection. The averaged RV anterior wall thickness and internal dimensions at diastole and systole (RVIDd and RVIDs, respectively) were measured. RV percent fractional shortening was calculated as (RVIDd − RVIDs)/RVIDd × 100%. RV percent ejection fraction (EF) was calculated from the RV cross-sectional area (2D short-axis view) using the equation RV %EF = (RV Vold − RV Vols)/RV Vold × 100%.
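The formulas above translate directly into a small helper function; the sketch below (in Python, with made-up mouse measurements in mm and µl rather than data from this study) computes fractional shortening, ejection fraction, and the cubed M-mode LV mass estimate.

```python
def lv_indices(lvid_d, lvid_s, lvpw_d, ivs_d, lv_vol_d, lv_vol_s):
    """LV fractional shortening (%), ejection fraction (%), and M-mode mass estimate.
    Linear dimensions are in mm (converted to cm for the mass formula),
    volumes in microliters."""
    fs = (lvid_d - lvid_s) / lvid_d * 100.0
    ef = (lv_vol_d - lv_vol_s) / lv_vol_d * 100.0
    d, pw, ivs = lvid_d / 10.0, lvpw_d / 10.0, ivs_d / 10.0   # mm -> cm
    mass_mg = 1.053 * ((d + pw + ivs) ** 3 - d ** 3) * 1000.0  # g -> mg
    return fs, ef, mass_mg

# Hypothetical mouse measurements
fs, ef, mass = lv_indices(lvid_d=3.8, lvid_s=2.4, lvpw_d=0.8, ivs_d=0.8,
                          lv_vol_d=60.0, lv_vol_s=22.0)
print(f"FS = {fs:.1f} %, EF = {ef:.1f} %, LV mass ~ {mass:.0f} mg")
```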
For the RV mass calculation, we first measured the RV endocardial borders in diastole (five measurements) and systole (five measurements) in five consecutive cardiac cycles in each flat image, generating 10 RV endocardial areas (RVendo). We then traced the epicardial borders and obtained the corresponding RV epicardial areas (RVepi) on the same frames. Ten RV free wall areas were then calculated by subtracting RVendo from RVepi (RVepi − RVendo). Finally, the total RV free wall volume of each plane was calculated by Simpson's method using the mean of the RV free wall area. RV free wall mass was obtained by multiplying this volume by the specific density of the myocardium (1.05 g/cc).
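A minimal sketch of that calculation is given below, assuming hypothetical traced areas (in cm²) for two imaging planes and an assumed slice thickness; it subtracts the endocardial from the epicardial areas, averages the free wall areas per plane, sums the slice volumes, and converts to mass with the myocardial density.

```python
def rv_free_wall_mass(epi_areas_cm2, endo_areas_cm2, slice_thickness_cm,
                      density_g_cc=1.05):
    """Simpson-style estimate of RV free wall mass: for each imaging plane,
    free wall area = epicardial area - endocardial area; summing
    (mean area * slice thickness) over planes gives the wall volume,
    which is converted to mass with the myocardial density."""
    volume_cc = 0.0
    for epi, endo in zip(epi_areas_cm2, endo_areas_cm2):
        free_wall = [a - b for a, b in zip(epi, endo)]  # per-trace wall areas
        volume_cc += (sum(free_wall) / len(free_wall)) * slice_thickness_cm
    return volume_cc * density_g_cc

# Hypothetical traces for two planes, five repeated measurements each
epi = [[0.30, 0.31, 0.29, 0.30, 0.32], [0.26, 0.27, 0.26, 0.25, 0.27]]
endo = [[0.22, 0.23, 0.21, 0.22, 0.24], [0.19, 0.20, 0.19, 0.18, 0.20]]
print(f"RV free wall mass ~ {rv_free_wall_mass(epi, endo, 0.1) * 1000:.1f} mg")
```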
Statistical Analysis
Data are presented as mean ± SEM for each experimental condition. To assess statistical differences among groups, and considering the possibility of unequal variances in the data, we first tested the equality of variances across groups. If the variances were unequal, Welch's t-test was used for the one-way analysis; otherwise, Student's t-test was used. Bonferroni adjustment was applied for multiple comparisons. P < 0.05 was considered statistically significant, and P < 0.01 was considered very significant. All statistical tests were performed with Prism software (GraphPad Prism for Windows, version 5.01).
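A hedged sketch of that decision logic, using SciPy rather than Prism, is shown below; the group arrays are placeholders, and Levene's test is used here as the variance-equality check, which may differ from the exact test applied in the original analysis.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, n_comparisons=3, alpha=0.05):
    """Test equality of variances first, then apply Student's or Welch's
    t-test accordingly; the significance threshold is Bonferroni-adjusted
    for the number of planned comparisons."""
    _, p_var = stats.levene(a, b)
    equal_var = p_var > 0.05
    _, p = stats.ttest_ind(a, b, equal_var=equal_var)
    return p, p < alpha / n_comparisons

# Placeholder data: e.g., LV-EF (%) in control vs. HU-28d mice
ctrl = np.array([62.1, 58.4, 60.3, 63.2, 59.8, 61.0])
hu = np.array([51.2, 49.8, 53.4, 50.1, 52.6, 48.9])
p, significant = compare_groups(ctrl, hu)
print(f"p = {p:.4f}, significant after Bonferroni: {significant}")
```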
Heart Weight and Body Weight Changes Following HU and Recovery
Three groups of mice were subjected to HU by tail suspension for 28 days, following which two groups were allowed to recover for 7 or 14 days (HU-28d, n = 10; HR-7d, n = 10; HR-14d, n = 10). The control group (n = 10) was treated equally, with the exception of tail suspension (Figure 1A). Body weight and heart weight were recorded before sacrifice (Figures 1B,C), and the masses of the LV and RV were calculated by echocardiography (Figures 1E,F). Compared with the control group, body weight showed an overall decrease, while heart weight increased following HU and HR. Thus, the ratio of heart weight to body weight increased following 28 days of HU and 7 days of HR (Figure 1D). LV mass calculated by echocardiography remained unchanged following HU or HR (Figure 1E), but RV mass exhibited an overall decrease following HU and recovered following HR (Figure 1F). All echocardiographic measurements were made while mice maintained heart rates of 450 ± 50 beats per minute (Figure 1G).

FIGURE 1 | Heart weight analysis following hindlimb unloading and recovery. (A) Timeline summarizing the experimental design. Body weight (B), heart weight (C), ratio of heart weight to body weight (D), LV mass (E), RV mass (F), and heart rate (G) of mice following hindlimb unloading (HU) and recovery. Ctrl-28d, control mice; HU-28d, mice following 28 days of HU; HR-7d, mice following 7 days of recovery after HU; HR-14d, mice following 14 days of recovery after HU. Values are means ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001.
Changes in Left and Right Ventricular Function Following HU and Recovery
To validate the effects of HU and HR on the LV and RV, transthoracic echocardiography was performed to determine ventricular function following HU and HR. After 28 days of HU, left ventricular fractional shortening (LV-FS), and left ventricular ejection fraction (LV-EF) decreased significantly in HU mice compared with the control, and did not recover during the first few days of HR. Full recovery was only apparent after reloading for 14 days (Figure 2A). However, after 28 days of HU, RV-FS and RV-EF decreased significantly in HU mice compared with the control, and did not recover even when reloaded for 14 days ( Figure 2B). The results indicate that simulated microgravity can induce a decline in left and right ventricular function, and that recovery is slower in the RV after reloading.
Changes in Left and Right Ventricular Structure Following HU and Recovery
To validate the influence of HU and HR on the LV, we performed transthoracic echocardiography to determine the structure of the LV following HU and HR (Figure 3A). Compared with the control, the end-systolic LV internal diameter (LVIDs) and the end-diastolic LV internal diameter (LVIDd) did not change in the HU-28d mice, but increased following HR for 7 days and recovered after 14 days of HR (Figures 3B,C). Furthermore, the changes in end-systolic LV volume (LV Vols) and end-diastolic LV volume (LV Vold) were the same as those for LVIDs and LVIDd (Figures 3D,E). The LV posterior wall thickness (LVPW) and the LV anterior wall thickness (LVAW) did not change following HU or HR (Figures 3F-I). Together, these data show that reloading after HU can induce enlargement of the left ventricular internal diameter and volume.
To validate the influence of HU and HR on the RV, we performed transthoracic echocardiography to assess the structure and function of the RV following HU and HR (Figure 4A). After 28 days of HU, the end-diastolic RV internal diameter (RVIDd) and the end-diastolic RV volume (RV Vold) decreased in the HU mice compared with the control, but recovered to their normal states in the HR-7d and HR-14d groups (Figures 4B,D). Furthermore, the end-systolic RV internal diameter (RVIDs) and the end-systolic RV volume (RV Vols) did not change following HU, but increased in the HR-7d and HR-14d groups (Figures 4C,E). The RV anterior wall thickness (RVAWd and RVAWs) and the interventricular septal thickness (IVSd and IVSs) did not change following HU or HR (Figures 4F-I). Together, these data show that HU and HR induce different structural changes in the LV and RV (Figure 4J) and that, during the recovery process, cardiac remodeling may be intensified because of reloading.
HU and HR Lead to Cardiac Remodeling
To address the influence of HU and HR in the LV and RV, hearts from mice following HU and HR were assessed for changes in morphology and gene expression. Histological analysis showed heart remodeling occurred following HU and HR. In hematoxylin and eosin-stained (HE) sections, gross evidence of edema was easily observed by separation of the myofibers in the LV, the interventricular septum (IVS), and the RV following HU. Recovery to a normal state occurred following 14 days of HR ( Figure 5A). Masson trichrome staining (MTT) showed a deeper staining of collagen in the LV, the interventricular septum, and the RV following HU. These changes also recovered after 14 days of reloading. In the HU-28d and HR-7d groups, relative Col1a1, Col3a1, and BNP mRNA levels increased in the LV, and recovered, although not significantly ( Figure 5B). In the RV, the relative mRNA levels of Col1a1 and Col3a1 increased in the HU-28d group, and only the level of Col1a1 recovered to its normal state in the HR-14d group. The relative mRNA level of BNP increased following HU, and continued to increase after 7 days of HR ( Figure 5C). These results demonstrate that HU leads to slight fibrosis and remodeling in both the left and right ventricle, and recovery occurs slowly after 14 days of reloading.
Changes in HDAC4, AMPK, ERK1/2, and LC3-II Activity in the Left and Right Ventricles Induced By HU and HR
To gain more insight into the signaling pathways involved in the declining function of both the left and right ventricles, we examined several important signaling molecules involved in cardiac remodeling induced by external or intrinsic stimuli. As shown in Figure 6A, quantification of phosphorylation levels normalized to total protein in the LV revealed that HDAC4 phosphorylation at Ser246 did not change following HU but increased following 7 days of reloading and was fully restored after 14 days. Compared with the control, Erk1/2 phosphorylation at Thr202/Tyr204 increased following HU, remained at a higher level during the first 7 days of reloading, and was fully restored after 14 days. The phosphorylation level of AMPK at Thr172 decreased following HU, continued to decrease after 7 days of reloading, and was restored after 14 days. According to the research of Liu et al. (2015), autophagy is involved in the HU-induced decline in heart function, so we assessed the change in LC3-II in our model. Quantification of LC3-II levels normalized to LC3-I revealed an increased LC3-II:LC3-I ratio following HU, and this ratio did not return to its normal state until after 14 days of reloading. The changes in these signaling pathways in the RV following HU and HR were also analyzed. As shown in Figure 6B, the level of HDAC4 phosphorylation at Ser246 in the RV increased following HU; however, it was not restored to the normal level until 14 days of reloading, which is in contrast to the changes in the LV. Phosphorylation of Erk1/2 at Thr202/Tyr204 was up-regulated following HU and recovered after 7 days of reloading. The phosphorylation of AMPK at Thr172 was reduced following HU and continued to decrease after 7 days of reloading; it was not restored to its normal state until 14 days after reloading, which is different from the changes in the LV. The change in the LC3-II:LC3-I ratio was the same as that in the LV.

FIGURE 5 | (caption fragment) Relative mRNA levels of Col1a1 (alpha-1 type I collagen), Col3a1 (alpha-1 type III collagen), and BNP (brain natriuretic peptide) in the LV and RV were analyzed by qPCR. H&E, hematoxylin and eosin; MTT, Masson trichrome staining; qPCR, real-time polymerase chain reaction. Values are means ± SEM (n = 7). *P < 0.05, **P < 0.01, ***P < 0.001.
We also analyzed the changes in mTOR phosphorylation and MuRF1/Atrogin1 levels, which are involved in protein synthesis and degradation pathways, respectively. In the LV, as shown in Figure S1A, mTOR phosphorylation at S2448 decreased following HU and was restored following 7 days of reloading. In comparison with the control, the level of Atrogin1, an E3 ubiquitin ligase that mediates proteolysis during muscle atrophy, increased markedly following HU, remained at a much higher level during the first 7 days of reloading, and then returned to the normal level after 14 days of reloading. The level of MuRF1, another ubiquitin ligase, was increased in the HU group; however, it did not recover to the normal condition even after 14 days of reloading (Figure S1A). In the RV, as shown in Figure S1B, the change in mTOR phosphorylation was the same as that in the LV. The levels of Atrogin1 and MuRF1 were both substantially increased following HU and were restored by 14 days of reloading (Figure S1B). The changes in these signaling pathways were closely related to the disorders of cardiac function observed in both the LV and RV and to the differences between them.
DISCUSSION
In this study, we report for the first time the differences between left and right ventricular remodeling induced by simulated microgravity and reloading. We also characterize the signaling molecules involved in this cardiac remodeling. Consistent with previous reports, our study indicates that left ventricular function declines following HU but recovers to its normal state after 14 days of reloading. Few studies have focused on the RV, however. We show here that the function of the RV also declines following HU, but does not recover even after 14 days of reloading. In other words, both the left and right ventricle exhibited a decline in function, but recovery of right ventricular function was much more difficult. We demonstrate that pathological remodeling signals, such as HDAC4, were activated following HU and recovered following HR in both the LV and RV. The physiological remodeling signal AMPK was inhibited in both the LV and RV following HU, but only restored in the LV following 14 days of HR.
Several studies have suggested that microgravity or simulated microgravity leads to cardiac remodeling and results in cardiac deconditioning upon reloading. In humans exposed to 6 weeks of bed rest, LV mass decreased by 8.0 ± 2.2%, RV free wall mass decreased by 10 ± 2.7%, and RV end-diastolic volume decreased by 16 ± 7.9%. After 10 days of spaceflight, LV mass decreased by 12 ± 6.9%. Thus, cardiac atrophy occurs during prolonged horizontal bed rest, but may also occur after short-term spaceflight (Perhonen et al., 2001). Using an experiment with 60 days of sedentary head-down bed rest, one group demonstrated that the reduced LV mass in response to prolonged simulated weightlessness is not simply due to tissue dehydration but rather to true LV remodeling that persists well into recovery (Westby et al., 2016). Previous studies conducted on rats and mice have provided conflicting data. Bigard et al. (1994) demonstrated that LV mass decreased following HU for 21 days in rats. Ray et al. (2001), however, suggested that the mass of the rat heart was unchanged after 28 days of HU. Powers and Bernstein (2004) also reported that absolute heart weights were not altered significantly after 14 days of tail suspension in mice. Moreover, few studies have focused on right ventricular remodeling induced by space flight or simulated microgravity. In our research, LV mass and structure did not change following 28 days of HU, consistent with some of the previous studies, but RVIDd and RV Vold did decrease, and RV mass trended slightly lower following HU. We therefore suggest that the RV is more sensitive than the LV to HU.
Many studies have demonstrated that cardiac remodeling is associated with a decline in heart function induced by microgravity and/or simulated microgravity. However, the changes in heart structure and function during reloading after simulated microgravity are not clear. In our study, heart weight increased significantly following HR for 7 days compared with the HU group, and recovered after 14 days of HR ( Figure 1C). The masses of both the LV and RV increased following 7 days of HR, although the changes were not significant (Figures 1E,F). Echocardiography revealed that LVIDd, LVIDs, LV Vold, and LV Vols increased following 7 days of HR, and recovered after 14 days (Figures 3B-E). RVIDs and RV Vols also increased following HR for 7 and 14 days (Figures 4C,E). In summary, both the LV and RV were enlarged following 7 days of HR. These results indicate that simulated microgravity leads to cardiac remodeling, and in the early stages of recovery, reloading may intensify remodeling.
The mammalian heart is a muscle whose fundamental function is to pump blood throughout the circulatory system. In response to a changed workload, typically caused by pathological or physiological stimulation, the heart undergoes remodeling in an attempt to maintain pump function in the new environment (Maillet et al., 2013). A variety of stimuli can induce the heart to grow or shrink. Exercise, pregnancy, and postnatal growth promote physiologic growth of the heart, while neurohumoral activation, hypertension, and myocardial injury can cause pathologic hypertrophic growth. As with other forms of cardiac remodeling, ventricular atrophy is induced by prolonged weightlessness during space travel, prolonged bed rest, and mechanical unloading with a ventricular assist device (Hill and Olson, 2008). Well-characterized signaling molecules that regulate cardiac remodeling include HDAC4, AMPK, ERK1/2, LC3-II, mTOR, Atrogin1, and MuRF1. HDAC4, a key member of the class IIa HDACs (HDACs 4, 5, 7, and 9), is expressed in numerous tissues and plays an important role in the modulation of biological responses and pathological disorders (Yang and Grégoire, 2005;Backs and Olson, 2006;Wang et al., 2014). Phosphorylated HDAC4 is exported from the nucleus to the cytoplasm, with consequent activation of MEF2 and its downstream target genes involved in pathological cardiac remodeling (Passier et al., 2000;Haberland et al., 2009;Ling et al., 2012). AMPK is a stress-activated kinase which functions as a cellular fuel gauge and master metabolic regulator, and is therefore crucial to cardiac homeostasis (Coughlan et al., 2014). The activation of cardiac AMPK is associated with the translocation of GLUT4 and phosphorylation of acetyl-CoA carboxylase (ACC), which promote ATP production by stimulating fatty acid oxidation, glucose uptake, and glycolysis (Coven et al., 2003;Maillet et al., 2013). AMPK is important for maintaining the physiological growth of the heart. ERK1/2 belongs to the mitogen-activated protein kinase (MAPK) family, and its activation has been reported to mediate both pathological and physiological cardiac remodeling (Tham et al., 2015). According to the research of Liu et al. (2015), autophagy is involved in the HU-induced decline in LV function; LC3-II expression increased in the LV after HU. mTOR is an atypical serine/threonine protein kinase that belongs to the phosphoinositide 3-kinase (PI3K)-related kinase family and interacts with several proteins to form two distinct complexes named mTOR complex 1 (mTORC1) and 2 (mTORC2; Laplante and Sabatini, 2012). In muscle, activation of mTORC1 can stimulate protein synthesis to drive muscle hypertrophy (Philp et al., 2011). MuRF1 and Atrogin1 are two identified muscle-specific ubiquitin ligases, which have been shown to be upregulated prior to the onset of atrophy in multiple models of muscle wasting (Bodine et al., 2001).

In this study, we detected these molecular signals in the LV and RV after HU and HR, and we explored the molecular mechanism of LV and RV remodeling induced by simulated microgravity and recovery. We found that the phosphorylation of HDAC4 at Ser246 was upregulated following HU for 28 days. This phosphorylation remained high in the RV (Figures 6F,G) and increased in the LV following 7 days of HR (Figures 6A,B). Meanwhile, the phosphorylation levels of ERK1/2 increased in both the LV and RV following HU for 28 days (Figures 6A,C,F,H), and further increased in the LV following 7 days of HR. Our results also showed that the ratio of LC3-II:LC3-I increased in both the LV and RV following 28 days of HU and 7 days of HR (Figures 6A,E,F,J). Autophagy was activated in both the LV and RV, consistent with previous reports (Liu et al., 2015).

FIGURE 6 | Activity of signaling pathways in the mouse heart following hindlimb unloading and recovery. (A,F) Representative western blots of HDAC4 and its phosphorylation at Ser246, AMPKα and its phosphorylation at Thr172, ERK1/2 and its phosphorylation at Thr202/Tyr204, and LC3 in the left ventricle (LV) and right ventricle (RV). Gapdh levels served as a loading control. Quantification of phosphorylation levels normalized to total protein (LC3-II levels normalized to LC3-I levels) of the LV (B-E) and RV (G-J). (K) Summary of changed signaling molecules. HDAC4, Histone Deacetylase 4; ERK, Extracellular Regulated Protein Kinases; AMPK, AMP-activated Protein Kinase; LC3, Microtubule-associated Protein Light Chain 3; Gapdh, Glyceraldehyde phosphate dehydrogenase. Values are means ± SEM (n = 4). *P < 0.05, **P < 0.01, ***P < 0.001.
These results indicate that both HU-simulated microgravity and reloading can activate pathological cardiac remodeling signaling pathways, which can initiate the expression of fetal genes in both the LV and RV and ultimately lead to cardiac remodeling. Following HU, the phosphorylation of mTOR at S2448 was decreased in both the LV and RV (Figure S1), indicating that the protein synthesis pathway was inhibited. The levels of Atrogin1 and MuRF1 were increased in both the LV and RV following 28 days of HU and 7 days of HR (Figure S1), which suggests that the ubiquitin-proteasome system was activated in both ventricles. The changes in these proteins contributed to the cardiac remodeling. The phosphorylation level of AMPK at Thr172 decreased following 28 days of HU and continued to decrease following 7 days of HR in both the LV and RV (Figures 6A,D,F,I). Moreover, the phosphorylation of AMPK returned to a normal level in the LV following 14 days of HR; this did not occur in the RV, however. Interestingly, the levels of AMPK phosphorylation were consistent with the functional changes in both the LV and RV. The physiological remodeling signal AMPK decreased following HU in both the LV and RV and did not return to its normal state in the RV following 14 days of HR. This may at least partially explain the different responses of the RV and LV following HU and HR.
This study provides evidence of the differences in the responses of the LV and RV under simulated microgravity and the signaling molecules involved in this process. We found that simulated microgravity leads to cardiac remodeling, and this remodeling could be reversed. In the early stages of recovery, reloading may intensify cardiac remodeling. Moreover, it is more difficult to restore the changes in the RV compared with the LV. Finally, we identified that following HU and HR, pathological remodeling signals, such as HDAC4, were activated, and physiological remodeling signals, such as AMPK, were inactivated in both the LV and RV, which led to cardiac remodeling and decline of heart function ( Figure 6K).
INTRA-ORGANIZATIONAL ANTECEDENTS OF TALENT MANAGEMENT IN THE CONTEXT OF POSITIVE ORGANIZATIONAL SCHOLARSHIP: A STUDY OF COMPANIES OPERATING IN POLAND
** Institute of Management, WSB University in Torun.
** Faculty of Economic Sciences and Management, Nicolaus Copernicus University in Torun.

Purpose: The aim of the study is to empirically validate the influence of talent management antecedents identified in the literature. The concept of talent management is discussed in the context of Positive Organizational Scholarship. This idea helped to prepare a more accurate enumeration of the antecedents of talent management presented in the literature. The analysis of the literature was the basis for developing a set of propositions which constitute the model to be tested empirically.

Methodology: The analyses presented in this paper were supported by the data collected in the questionnaire survey conducted among companies operating in Poland in 2012. The examined sample consisted of 73 organizations. Pearson correlation and partial least squares (PLS) path modelling were applied to analyze the causal relations between the variables.

Findings: The analysis positively validated the cause-effect relationships between talent management and the following antecedents: talent management infrastructure and organizational culture, organization strategy, and internal communication.

Research limitations: The study was limited to companies operating in Poland which established a specific cultural context. The relatively small research sample was another constraint. Therefore, the findings cannot be automatically extended to other organizations. Moreover, in order to reduce the observed ambiguity between causes and effects, quantitative studies should be supported by qualitative surveys based on the case study methodology.

Originality/value: The outcomes of the study contribute to the field through the empirical testing of the theoretical assumptions concerning the antecedents of talent management.
INTRODUCTION
According to the assumptions of the resource-based view of strategic management, only strategically valuable, rare, non-imitable and non-substitutable resources may become a foundation of a sustainable competitive advantage (Barney, 1991). Because talent seems to meet all the aforementioned requirements, talent management is considered to be a key process for organizational development and building competitive advantage (Heinen and O'Neill, 2004;Ashton and Morton, 2005;Lewis and Heckman, 2006). Talent management enables organizations to distinguish themselves from their competitors, which makes its role unquestionable. Talent and talent management have become the research topics of studies conducted by the Gallup Institute. Moreover, positive psychology, driven by Seligman's call to focus psychological studies on human strengths, contributed to the rising interest in talent management (Seligman, 2004). The key assumption of this approach is the belief that focusing on human strengths and talents, rather than on improving weaknesses, yields much greater efficiency and effectiveness (Clifton and Harter, 2003). Consequently, the concept of talent management and the idea of organizational development based on strengths have been included in Positive Organizational Scholarship (Cameron et al., 2003a). In order to achieve outstanding outcomes, in-born talent should be connected with knowledge and skills; the synthesis of talent, knowledge and skills is labelled a strength (Buckingham and Clifton, 2003). Therefore, contemporary organizations are advised to make talents a foundation of their development and to enable those talents to develop, and thus they need efficient and effective talent management programmes.
Talent management is a complex and challenging process which requires specific conditions to be implemented and to provide positive outcomes. In Poland, talent management processes have not been fully developed yet. Generally, Polish managers are aware of the paramount importance of talent management. Nevertheless, numerous companies still admit that they lack formal talent management solutions or that their talent management procedures are significantly weak. Therefore, the question arises about the antecedents of talent management. The models of talent management (Ashton and Morton, 2005;Tansley et al., 2007;Pocztowski, 2008;Collings and Mellahi, 2009) often theorize on talent management antecedents and determinants, but they do not provide empirical evidence to confirm these assumptions and the identified relationships between variables. Having conducted literature studies of talent management antecedents, the authors have not come across any empirical papers confirming the relations between talent management and the factors which determine it. Thus, a gap in the knowledge base has been identified which stimulates research in the field.
The aim of the study is to empirically validate the influence of talent management antecedents identified in the literature. The paper consists of two parts: the literature survey and the empirical study. First of all, the idea of Positive Organizational Scholarship is discussed in order to establish the context for further studies. Secondly, the concept of talent management is explored. Thirdly, the antecedents of talent management enumerated in the literature are presented. The analysis of the literature is the foundation for developing a set of propositions which constitute the model to be tested empirically. Fourthly, the method of the study is described. Then, the analysis of correlations and regressions between the variables in the model is conducted. Finally, the findings from the empirical survey are discussed and recommendations for further research are formulated.
Positive Organizational Scholarship as the context for research in talent management
Positive Organizational Scholarship (POS) is one of the recently emerged concepts in management. In 2002, the Center for Positive Organizational Scholarship (nowadays known as the Center for Positive Organizations) was founded at the Ross School of Business, the University of Michigan. The publication entitled "Positive Organizational Scholarship: Foundations of a New Discipline", edited by Cameron et al. (2003a), is considered to be the starting point of the positive approach in management studies. Certainly, Positive Organizational Scholarship refers to previous studies applying the humanistic perspective to management, and it shares some similarities with other concepts and approaches in the social sciences focusing on the positive aspects of management and human behaviour. The idea of Positive Organizational Scholarship is expressed in the name of the concept. 'Positive' refers to the orientation towards all the positive aspects, states, behaviours and attitudes. It assumes that the success of an organization should be built on its strengths rather than on the improvement of weaknesses. 'Organizational' refers to the context of the study, which is focused on organizational processes and states. 'Scholarship' means that research methodology and a scientific approach are applied to deal with the issues in the field (Cameron et al., 2003b;Pace, 2010). The attention of Positive Organizational Scholarship focuses on "the enablers (e.g. processes, capabilities, structures, methods), the motivations (unselfishness, altruism, contribution without regard to self), and the outcomes or effects (e.g. vitality, meaningfulness, exhilaration, high-quality relationship) associated with positive phenomena" (Cameron et al., 2003b).
Positive Organizational Scholarship represents the humanistic approach to management. Therefore, human resources management, and talent management as one of its elements, are within the interest of POS studies. Human resources are enumerated among the intangible assets which constitute positive organizational potential (Glińska-Neweś, 2010;Chodorek, 2010). Positive organizational potential is defined as "the set consisting of resources related to strategy, structure, human resources management, power, control, innovations, company's integration and employees' identification, and leadership. Positive organizational potential refers to such characteristics and states of organizational resources that create positive organizational culture and positive organizational climate" (Peyrat-Guillard and Glińska-Neweś, 2010). The concept of positive organizational potential is particularly interesting as it helps explain the relationships between the positive bias of organizations and their development and performance (Haffer, 2010;Haffer, 2013). It should be noted that talent management is identified among the key areas of positive organizational potential. Some efforts have also been made to identify and discuss the antecedents of talent management in the context of positive organizational potential studies (Chodorek, 2013;Karaszewski and Lis, 2014). Nevertheless, these studies have a rather exploratory character, and they lack the testing of the cause-effect relationships between talent management and its antecedents proposed in the literature.
Talent management
Nowadays, talent management is considered to be both one of the leading processes and one of the main challenges of managing human resources in business organizations (Heinen and O'Neill, 2004;Ashton and Morton, 2005;McCauley and Wakefield, 2006;Ingham, 2006). The talent management concept, which emerged in the U.S. in the 1990s, seems to be a relatively well-grounded issue. However, one may assume that the area is affected by the same recurring problems, which have not been solved and which are becoming more and more important. What is more, even the increasing number of studies resulting in business and scientific papers dealing with the issue has not changed the situation significantly.
As observed by Ashton and Morton (2005, p. 30), neither a common approach to nor a common definition of talent management had yet been developed. Although a decade has passed since that observation, nothing has changed in this field (Vaiman and Collings, 2013). Talent management is defined at various levels (Lewis and Heckman, 2006) and from different perspectives. Blass et al. (2009) operationalize talent management from the perspectives of a process, culture, competitiveness, development, human resources planning and change management, shifting the centre of gravity in each perspective. Moreover, the issues of global talent management (Tarique and Schuler, 2010) and strategic talent management (Collings and Melahi, 2009) have been introduced to the literature. On the one hand, the focus of talent management is put on selecting the best employees, 'champions' (an elitist approach); on the other, all employees may be perceived as talents and the role of talent management is to properly match their strengths with the best-suited positions in a company (an egalitarian approach) (Garrow and Hirsh, 2008;Reilly, 2008).
The most often cited definitions of talent management describe the construct as a set of activities and processes aimed at identifying, attracting, recruiting, selecting, developing, retaining and using high potential employees, who are extremely valuable for an organization (McCauley and Wakefield, 2006;Tansley et al., 2007). Recognizing the abundance and variety of approaches in the literature, the following definition describing the ideal state of talent management is accepted for further analyses and discussions: "The model talent management encompasses the processes of searching, identifying, attracting and recruiting people of above-average intellectual potential and skills as well as developing and applying their capabilities in order to contribute to the company's aspirations and needs" (Chodorek, 2013). Recognizing and selecting the most talented employees is an investment in the company's human capital. As charisma, skills and energy of employees are the key antecedents of company performance, contemporary companies fight the "wars for talents" (Beechler and Woodward, 2009).
In order to result in effective outcomes, talent management must be perceived and implemented in the long-term perspective. One-time programs are ineffective. Such activities are usually expensive and temporary. Their aims are not clearly defined. In effect, employees who acquire new skills and capabilities are not able to make use of them in their workplaces, which results in employee frustration. Thus applying the strategic approach to talent management seems to be a prerequisite, while identifying the positions with the key impact on an organization is a fundamental activity in the strategy development process (Collings and Melahi, 2009). Strategic talent management enables an organization to build up a relevant architecture of human resources necessary to change the behaviour of employees in order to improve their effectiveness. The strategic approach to managing talent facilitates identifying the key activities of a company and defining specific requirements for employees to achieve strategic objectives at the highest possible level (Minbaeva and Collings, 2013;Guthridge et al., 2008).
The strategic approach to talent management enables a company to plan the requirements for specific talents. Due to a long-term planning perspective, an organization is able to define who (a person characterized by a set of given capabilities) is necessary for its development and verify whether it has such an employee (Ashton and Morton, 2005). As a consequence, an organization knows what kind of talent is needed, which enables managers to plan for talents. Detailed job descriptions support such a planning process. Defining competencies is the foundation for searching talents both within an organization and outside it (Pilbeam and Corbridge, 2010).
The relevant methods and tools of talent identification, recruitment and selection are necessary for managing talents in an efficient and effective way. The analysis of recruitment forms, psychological tests, assessment centres and interviews have become the prerequisites for identifying, recruiting and selecting high potential employees (Elegbe, 2010).
Another important aspect of talent management is building a market position of a company that attracts the best employees. The activities aimed at creating the company image as an optimum choice for a workplace are defined as employer branding (Surmacz and Bociąga, 2011). The scope of such activities is wide and their aim is to highlight the distinctive features of an organization (of an economic, psychological, cultural or functional character) which may be attractive for a potential, talented employee (Yaqub and Khan, 2011).
Retaining talented employees and maintaining the high level of their commitment is a challenge for a company. Therefore, companies continuously make efforts to seek an answer to the question of how to retain talented personnel (Kaye and Jordan-Evans, 2012). This is a complex issue determined by the variety of factors related to personality, values, a specific situation of an organization and what it offers to employees. Talented employees should be treated individually, therefore companies establish for them individual development paths and enable them to grow in a continuous way (Ahmadi, et al., 2012). The most popular methods of developing talent include: participation in company projects, training, mentoring, coaching, job rotation, participation in international projects and internships abroad (Jarosławska, 2011).
Team work establishes the conditions for the most efficient and effective use of the potential of talented employees. Besides applying their skills and competencies, high potential employees share knowledge as well as teach their colleagues and learn from them (Calo, 2008). The process of knowledge exchange may be more efficient when employees have possibilities to inspire the development of teams. The awareness of being the font of knowledge for others, and having the opportunity to benefit from the knowledge of workmates increases employee satisfaction. In order to achieve such a situation, cooperation and the mutual enhancing of motivation are embedded as ground rules into the culture of an organization (Reed, 2001).
Nowadays the importance of talent management processes is commonly accepted and managers are aware of their roles and outcomes. Moreover, research shows that talent management has become one of the priorities for HR managers who consider attracting talented employees among their most crucial tasks (Trendy HR, 2013). If talent management is a priority for Polish companies, why then do so many managers report problems and challenges in managing talent and implementing talent management processes in their companies? The study conducted by Deloitte reveals that half of Polish managers participating in the survey claim that talent management programs in their companies require substantial or radical changes (Deloitte, 2013). Hence, a question arises: what are the antecedents of talent management which determine the success of TM programs?
Intra-organizational antecedents of talent management
There is a variety of publications which mention and study some selected factors influencing talent management. Nevertheless, a comprehensive analysis of talent management antecedents has not been completed as yet.
It may be assumed that a clear company strategy oriented to talent management is the basis for planning and implementing a talent management program. Such a strategy enables a company to identify core competencies and plan succession (Tansley et al., 2007;McDaniel and D'Egidion, 2010). It is obvious that a company cannot manage talents in an efficient and effective way without being aware of its aims and objectives. A company aims to determine the level of employee skills and competencies which are required at present and may be needed in the future. Consequently, they enable companies to plan which talents are/will be necessary and how to develop the talents at the company's disposal (Sloan et al., 2003). Moreover, clearly defined aims are important from the point of view of talented employees. They help them to identify their roles in achieving company aims and their individual impacts on these aims (Turning talent into strategic assets, 2010).
Proposition 1: The company strategy determines talent management both directly and indirectly.
The architecture of the human resources management system is another important aspect determining talent management. Such an HR system should support the value and uniqueness of employee skills and competencies (Collings and Melahi, 2009). Williamson (2011) claims that the transparency of the talent management system is a key to its success and it enhances the credibility of this system. Talent management must be transparent to all employees, and the principles governing the selection of high potential employees and their privileged treatment must be understandable and clear for every organization member. Such transparency goes beyond selecting talented employees and it applies, among others, to remuneration policies, measuring work effectiveness and appointing to all positions (Garrow and Hirsh, 2008;Bryan and Joyce, 2007). A transparent and clear talent management system prevents any disputes, frustrations and subjectivity of personnel assessment. It gives employees the feeling of psychological comfort and enables them to identify their achievements (Koziński and Witkowska, 2009;Blass, 2009;Merlino, 2011). Moreover, an organization needs formal preparation to implement a talent management program to effectively manage high potential employees. The ability to develop and apply tools and techniques for identifying strategic capabilities of talented employees is another prerequisite for a talent management program. Knowing what company employees know and what skills and competencies are still needed is valuable knowledge (Mayo, 2009). Such tools and techniques facilitate the identification, recruitment and selection of talented employees. A motivation system responsive to talented employees' aspirations and their attitudes to work is another important formal aspect of a talent management infrastructure (Mayo, 2009;Ahmadi et al., 2012). Summing up, a talent management infrastructure encompasses formal solutions (processes, techniques, tools) applied by organizations within the field of talent management.
Proposition 2: A talent management infrastructure determines the talent management process.
Organizational culture belongs to those talent management antecedents which are most often mentioned in the literature (Bryan and Joyce, 2007;Ashton and Morton, 2005;Ahmadi et al., 2012;Kopeć, 2012;Tabor, 2013). Organizational culture favourable for talent management is described as a 'talent-nurturing culture'. Organizational culture should support talented employees, promote their 'difference' from others and creativity as well as reward and recognize employees of distinctive achievements. In contrast, organizational culture persecuting employees for their willingness to go beyond standard procedures and hierarchy prevents talents from emerging and blooming (Tabor, 2013). It is of paramount importance to have top executives showing the value of talent management for an organization. Without support from top management, any talent management initiatives are doomed to fail. If leaders do not believe in talent management, neither will their followers. This means that the belief in talent management and effective leadership are important prerequisites of talent management (Ellehuus, 2012;Ahmadi et al., 2012;Ashton and Morton, 2005).
Within an organizational culture supporting talent management, two aspects of company social capital are highlighted: trust and high quality relationships (Vaiman et al., 2012). Trust is perceived as a trigger and a motivation factor fostering talent management (Tansley et al., 2007). The relevant level of trust in an organization fosters achieving its aims, facilitates knowledge sharing and stimulates the employee identification with an organization and their corporate patriotism. As a consequence, the principles of a talent management program are commonly accepted, employees are eager to express their opinions, they behave frankly and team work is appreciated and developed (Sosińska, 2007).
Positive relationships establish the foundation for a talent management system. Hard working and effective employees show more willingness to engage in behaviour-building and strengthening high quality relationships which gives them an opportunity to benefit from such relationships e.g. receive the assistance of their co-workers when needed. Good relationships trigger a spiral of positive emotions and make people enjoy their work (Sekerka et al., 2012). As a consequence, employees are loyal to such organizations and show the maximum level of engagement which significantly contributes to the development of talented employees and their potential (Kaye and Jordan-Evans, 2012). Relationships are an important factor impacting on talented employees' work quality and willingness to work. Due to the fact that relationships go across the boundaries between departments, locations, fields of expertise and hierarchy, talented employees are able to find the niches and areas in an organization which are not fully exploited.
Proposition 3: Organizational culture determines talent management.
Corporate social responsibility (CSR) is a construct and an area of company activity directly related to organizational culture, trust and positive relationships. More and more often, companies implement the concept of corporate social responsibility in order to be distinguished from competitors and develop their positive image. Corporate social responsibility is a useful tool applied to attract and retain talented employees. Therefore numerous companies perceive corporate social responsibility as an important element of their HR strategies (Vaiman et al., 2012;Kim and Scullion, 2011). In the contemporary labour market, high salaries are not sufficient to attract talented employees seeking some other values highlighted by the CSR concept. Organizations taking into account the interests of society, environmental aspects and their relationships with stakeholders (employees in particular) assume a higher level of investments in human resources. Such organizations are supposed to create good working conditions, support employee development and take care of employees in need (e.g. those suffering from illness). All these interventions motivate employees and increase engagement in performing their duties (Kim and Scullion, 2013).
Proposition 4: Corporate social responsibility determines talent management.
Communication and communication skills are talent management antecedents often cited in the literature (Ulrich et al., 2012;Garrow and Hirsh, 2008). Transparent and open communication is postulated in order to foster talent management programs. The research shows that supportive communication providing employees with information about their higher status leads to better work performance and an increase in employee loyalty to their company (Fernandez-Araoz et al., 2012). Companies should also pay attention to the way they are communicating with talented employees (transparency of messages, programs and objectives) (Koziński and Witkowska, 2009;Ahmadi et al., 2012). Employers and managers should focus their attention on building effective communication systems both within an organization and outside it, because this enhances the speed of information exchange between employees as well as between employees and managers. Moreover, effective communication systems are required for team work, in particular when team members cooperate from distant locations. Last but not least, effective communication fosters organizational culture which values knowledge as a common valuable asset (Zydel, 2010;Piskorski et al., 2010).
Proposition 5: An effective communication system determines talent management.
Creating development opportunities for talented employees is another antecedent worth mentioning and analyzing. Talented employees need stimulants, new challenges and ambitious aims to be achieved (Pilbeam and Corbridge, 2010;Kaye and Jordan-Evans, 2012). From the point of view of high potential employees, the prospects for professional development are important motivations for joining an organization and being loyal to it. Talented employees need challenges, possibilities to participate in projects, decision-making processes and connecting their tasks with a business strategy (Talent management is Bupharm's prescription for success, 2011; Cunningham, 2007;Ulrich et al., 2012;Garrow and Hirsh, 2008). Due to such elements, talented employees feel the importance of their roles in an organization and their possibilities to have an influence on the company's future through their engagement, knowledge and skills. A strategic approach to the development of employees is manifested, among others, through career paths which establish plans for talents climbing the ladder of the organizational hierarchy. Career paths are considered to be both a manifestation and an antecedent of talent management.
Proposition 6: Creating development opportunities for talented employees determines talent management.
Organizational structure is another talent management antecedent enumerated in the literature (Ashton and Morton, 2005;Tansley et al., 2007). The aspects of organizational structure which are important for managing talents include: the level of centralization (the lower centralization the better) and a hierarchy (balance between flattening an organizational structure and establishing conditions for career paths and promotions) (Garrow and Hirsh, 2008;Blass, 2009). Bryan, et al. (2006) argue that tall organizational structures impede identifying talents, searching the opportunities for their development and creating organizational knowledge. An organizational structure should facilitate team work and create favourable conditions for establishing flexible project teams. This is of paramount importance for acquiring and sharing knowledge which leads to improvements and innovations. Talented employees free from the pressure of bureaucratic procedures are more eager to show creativity in their workplaces and improve the work of themselves and other personnel.
Proposition 7: Organizational structure determines talent management.
Middle-level managers, their knowledge and competencies seem to play a significant role in talent management programs. They contribute to talent management processes through identifying high potential employees, supporting their development and creating career paths (Tansley et al., 2007). Middle-level managers motivate talented subordinates to seek the possibilities for professional development and make above-the-standard efforts (Bryan and Joyce, 2007). On the other hand, a toxic middle manager may be the first-line 'killer' of talented employees.
Proposition 8: Middle-level management determines talent management.
Method of research
The data presented in this paper come from the research project entitled "Strategic management of the key areas of Positive Organizational Potential (POP) -conditions, approaches and models recommended for companies operating in Poland". The project was funded by the National Science Centre grant no. DEC-2011/01/B/HS4/00835. The project was based on four complementary data and information elicitation methods, namely a questionnaire survey, interviews, a Delphi session and a classic Delphi (by correspondence). This paper concentrates on the results of the questionnaire survey. Data collection was completed in 2012. The sample, comprising 73 companies, was selected from organizations operating in Poland recognized as leaders in their industries (or at least as top-ranking companies).
The respondents were asked to assess the accomplishments of the companies they managed in nine areas, based on an eleven-grade scale <0%, 10%, 20%, …, 100%>, where 0% refers to the situation when the ideal feature definitely does not characterize an assessed area and 100% refers to the situation when the ideal feature definitely characterizes an assessed area. These areas are as follows: talent management (TM), strategy (STRAT), organizational culture (CULT), middle level management (MLM), development opportunities for talented employees (DEV OPP), internal communication (COM), corporate social responsibility (CSR), talent management infrastructure (TM INFRA) and organizational structure (STRUCT). In order to test the reliability of the questionnaire, Cronbach's alpha coefficients were calculated for the nine variables corresponding with the above-mentioned areas included in the questionnaire. Cronbach's alpha coefficients ranged from .82 to .97, which confirms the high level of the questionnaire reliability.
As the dominant scale used in the questionnaire was a percentage scale which is a ratio scale, the Pearson correlation and PLS path modelling together with appropriate statistical tests were applied to analyse the causal relations between variables. IBM SPSS Statistics and SmartPLS (Ringle, Wende, and Will, 2005) software were used for statistical analyses. The majority of the estimates were made on the basis of the generalization of the subjective assessment of the managers. In this way it was possible to identify the determinants of talent management in companies operating in Poland. The enumerated variables constitute the foundation for the comprehensive model of talent management antecedents presented in Figure 1.
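For readers who wish to reproduce this type of reliability and correlation analysis, the sketch below shows how Cronbach's alpha and a Pearson correlation matrix could be computed with standard Python tooling. The variable names and the data layout (respondents in rows, questionnaire items in columns) are illustrative assumptions, not the instrumentation used in the original study, and the PLS path model itself is not replicated here.

```python
# Illustrative sketch (not the original study's code): Cronbach's alpha for one
# questionnaire scale and Pearson correlations between aggregated variables.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame of item scores (rows = respondents)."""
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 73 respondents, items scored on the 0-100% scale.
rng = np.random.default_rng(0)
tm_items = pd.DataFrame(rng.integers(0, 11, size=(73, 5)) * 10,
                        columns=[f"tm_{i}" for i in range(1, 6)])
print("Cronbach's alpha (TM scale):", round(cronbach_alpha(tm_items), 2))

# Aggregated variables (one column per area) -> Pearson correlation matrix.
variables = pd.DataFrame({
    "TM": tm_items.mean(axis=1),
    "CULT": rng.integers(0, 11, 73) * 10,
    "TM_INFRA": rng.integers(0, 11, 73) * 10,
})
print(variables.corr(method="pearson"))
```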
Research analysis
The analysis of correlations and regressions was applied to validate the propositions included into the model. First of all, in order to confirm the relationships among the variables, the analysis of correlations was conducted (Table 1). The analysis shows that all the identified antecedents strongly correlate with talent management, which confirms the important relationships between the variables. The top correlates are: talent management infrastructure (r = .781), organizational culture (r = .760) and internal communication (r = .756). Simultaneously, it should be highlighted that strong correlations are observed between all the variables included in the model.
In the case of the correlation coefficients referring to the pair of variables, it cannot be stated which of the two is the cause or the effect. One can only refer to the existence of correlation between them. That is why the PLS path modelling was carried out to test the proposed model of talent management antecedents and show the cause-effect relations between the variables. The main idea of the PLS regression analysis is data prediction, forecasting for a given variable on the basis of other variables. Figure 2 shows the model estimation results. The path coefficients allow the assessment of the impact of predictor (independent) constructs on endogenous (dependent) constructs, represented by rectangles with arrow heads. The higher their value, the stronger their impact, as the path coefficients represent the estimated change in the endogenous variable for a unit change in a predictor variable. According to the data, the increase in the STRAT construct will have a strong positive impact on all dependent constructs excluding TM, in which case the direct impact will be positive but rather weak. However the total (direct and indirect -through the other variables) effect of the increase in STRAT on TM will be positive and strong as in this case ß = .647. This means that the improvement in the quality of strategy in companies as regards taking employees' opinions into consideration when creating it as well as genuine communicating of goals and a range of their achievement to employees will result in the improvement of the talent management quality. This can result in the inclusion of talent management in the strategy as well as in more advanced talent management practices, e.g. individual career paths applied for talented employees or advanced methods of attracting the best employees available in the labour market. Furthermore, the high total effect of STRAT on TM is caused in particular by the high value of path coefficients between the STRAT and two other variables CULT (ß = .670) and TM INFRA (ß = .696) which in turn have a big impact on TM. This means that effective talent management is determined by cultural issues like a high level of trust and positive interpersonal relationships present in a company and a highly developed talent management infrastructure including high quality HR policies and procedures respecting employee interests. Both factors influencing effective talent management strongly depend on the strategy quality.
The relationships between the STRAT and the other variables observed on the basis of path coefficients are not surprising, since the strategy and individual policies included in it determine all the areas of a company's functioning. However, the results regarding the relationships between TM and its determinants are more surprising.
As was mentioned above, the path coefficients shown in Figure 2 indicate that the strongest positive impact on the TM variable will be caused by the increase in the TM INFRA and CULT variables (strong direct impact), next in the STRAT variable (strong total effect), and then in the COM variable (weak direct impact, ß = .151). The latter variable concerns organizational communication. It appears that the improvement in organizational communication as regards comprehensiveness and clarity of information and values communicated in a company will result in the improvement of the talent management quality, however this impact will be weak. In the case of the STRUCT and CSR variables reflecting transparent and teamwork-oriented organizational structure and corporate social responsibility initiatives maximizing the value of strategic stakeholders of a company, no impact on the TM variable was confirmed (ß = .007 and ß = .052 respectively).
Finally, it can be seen in Figure 2 that two path regression coefficients from the MLM construct to the TM construct and from the DEV OPP construct to the TM construct take negative values (ß = -.126 and ß = -.392 respectively), which means that the improvement in the quality of middle-level management as well as in the development opportunities created for talented employees will result in the decrease in talent management sophistication. These are the most surprising results, however they may suggest that the direction of the considered relationships should be reversed, especially when it comes to the latter one, namely DEV OPP-TM.
Discussion
The data presented and analysed above enable the authors to test the propositions formulated as a result of the literature studies. As a result of the research, the propositions may be divided into three categories. In the case of the first category, including organizational strategy, talent management infrastructure, organizational culture and internal communication, the research confirms their influence on talent management. The second category of variables (corporate social responsibility and organizational structure) shows a minimal (almost unnoticeable) cause-effect relationship with talent management. The most surprising results are achieved in regard to development opportunities for talented employees and middle-level management, which show negative values of the regression coefficients.
Proposition 1 was validated. A clear strategy determines talent management and directly influences the other talent management antecedents identified in the model. Similarly, Proposition 2 was confirmed by the empirical research. In order to manage talents in an efficient and effective way, organizations need tools, methods and clear procedures. Without an appropriate infrastructure, hardly any stage in the talent management process can be effectively completed. Undisputedly, talent management is determined by the organizational culture, which confirms Proposition 3. Talents should be considered as something valuable by all the members of an organization. This finding was confirmed in interviews conducted in Polish companies. As noted by one of the HR managers: "an organization must be ready for talent management programmes with its values, attitudes, behaviour, but first and foremost with a high level of trust and intra-organizational relationships". The findings (ß = .151) indicate that intra-organizational communication has some, albeit weak, influence on talent management, which enables us to validate Proposition 5. Without unambiguous communication and feedback to high potential employees, it may be impossible to effectively implement talent management policies.
As regards corporate social responsibility, there are no findings to confirm its impact on talent management. A low value of the regression coefficient (ß = .052) demonstrates that its influence is very weak (almost unnoticeable). While observing business reality, this finding seems to be logical. Corporate social responsibility may be supportive for talent management but it does not determine talent management processes. Corporate social responsibility contributes to building a positive image of an organization which may attract new talents and establish a better working environment for employees. This means that Proposition 4 was disproved. Similarly to corporate social responsibility, organizational structures may support talent management programs. However, it is not possible to point out what type of organizational structure is optimal from the perspective of talent management, as it depends on the size of the company and its characteristics. Therefore, Proposition 7 could not be validated as the data of the survey do not confirm the impact of the organizational structure on talent management.
Proposition 6 was disproved, which is surprising. Almost every publication related to talent management highlights the role of creating development opportunities for talented employees as the basis for attracting and maintaining high potential employees. However, the development opportunities for talented employees (such as rewarding innovative employee behaviour, a sophisticated training system, the delegation of entitlements to empower lower-level employees, etc.) can be created before a formalized, systemic approach to talent management is established in a company and, conversely, they can be a result of effective talent management. Similarly, Proposition 8 was refuted. In the opinion of the respondents, middle-level managers are not perceived as a force significantly influencing talent management. The authors admit that this was surprising for them while comparing the empirical findings with the outcomes of literature studies and even common sense. Middle- and first-line managers seem to be the first ones who can identify talented employees, shape their career paths and assign them to the tasks where their talents can flourish. In the opinion of the authors, two explanations for these findings are possible. The first interpretation could be that the more competent and inspiring the middle managers are for their people, the smaller the need to develop a formal talent management system in a company, as the middle managers already play their coaching role for talents very well. The second explanation for the observed lack of the cause-effect relationship between middle managers and talent management programs relates to the influence of Polish culture on the organizational cultures of the companies under study. In Polish culture, standing out from the crowd and being a "star" is not commonly accepted (cf. Skuza et al., 2013, p. 461). Therefore, if middle managers represent such values and they are afraid that high potential employees can outperform them, any talent management initiatives will be hampered. Moreover, such managers do not contribute to talent management processes, which was observed by the respondents in the studied companies. It should be highlighted that such attitudes of managers ought to be altered.
The regression coefficients in this study indicate the reverse direction of the relationship between the development opportunities and talent management (ß = -.392), as well as middle level management and talent management (ß = -.126), which may suggest that in the aforementioned pairs of variables talent management is an antecedent, while the variables representing development opportunities and middle management should be considered as the outcomes of effective talent management programs. Thus in order to test such an assumption the model was changed and recalculated ( Figure 3). This time the relationship between the talent management variable and the development opportunities variable, presented in Figure 3, is opposite and can be described as follows: the increase in the talent management construct will have a strong positive impact on the development opportunities construct. In such a case it is possible to indicate effective talent management consequences which can occur in the examined companies. These are the above-mentioned rewards for innovative employee behaviour, sophisticated training system and entitlements delegation in order to empower lower level employees, but also ambitious goals set for employees, high level of employee autonomy and responsibility and a result-dependent salary system. The interpretation of a path regression coefficient in the case of a reverse talent management -middle-level management relationship is as follows: the increase in the talent management construct will have a strong positive impact on the middle-level management construct, which means that the improvement in the talent management system effectiveness will result in the improvement of the efficiency of middle level management. In this case, taking into account the whole model indicated in Figure 3, the quality of middle-level management is determined by well-thought-out and well communicated strategy and effective talent management system, which for its effectiveness needs to be created and managed from the strategic, rather than the middle-level perspective.
As a consequence of the changes in the model, high values of the path coefficients were observed (TM-DEV OPP ß = .688; TM-MLM ß = .731), which confirms the assumption. There is some logic in the observed relationships between managing talents on the one hand and creating development opportunities for them and middle-level management on the other. In order to enable employees' professional development adjusted to their strengths and talents, organizations should be able to identify and select such high potential employees as well as create individual career paths, which requires effective talent management solutions.
The aforementioned findings seem to be interesting but they need further exploration. At the same time it should be mentioned that, as a side effect of the changes introduced in the model, the impact factor of such antecedents as organizational culture, talent management infrastructure and organizational strategy decreased.
CONCLUSION
The study has empirically tested the influence of talent management antecedents identified through literature surveys. The analysis encompassed eight categories of factors enumerated among the determinants of talent management programs: organizational strategy, organizational culture, middle-level management, development opportunities for talented employees, internal communication, corporate social responsibility, talent management infrastructure and organizational structure.
The analysis based on the data of the questionnaire survey conducted in companies operating in Poland positively validated the cause-effect relationships between talent management and the following antecedents: talent management infrastructure and organizational culture (strong direct impact), organizational strategy (strong total effect, mainly triggered by the indirect impact), and internal communication (weak direct impact). The study refuted the propositions on the role of organizational structure and corporate social responsibility in fostering talent management programs. As regards the influence of creating development opportunities for talented employees and of middle-level management on talent management, ambiguous results were achieved. In both cases, the regression analysis showed a negative cause-effect relationship between these factors claimed to be antecedents and talent management. In order to investigate these ambiguities thoroughly, the direction of the cause-effect relationships between the variables in the model was changed. As a result, the analysis confirmed that the increase in the talent management construct will have a strong positive impact on the development opportunities and middle-level management constructs.
The outcomes of the study contribute to the field through the empirical testing of theoretical assumptions concerning the antecedents of talent management. Nevertheless, it should be stressed that the study was limited to companies operating in Poland, which establishes a specific cultural context. A relatively small research sample was another constraint. Therefore, the findings cannot be automatically extended to other organizations. The complexity of organizations results in difficulties in unambiguously distinguishing between causes and effects. Therefore, this study should be considered as the first attempt to empirically validate the antecedents of talent management and should be followed by further quantitative research conducted in the international context and based on larger samples. Moreover, in order to reduce the observed ambiguity between causes and effects, quantitative studies should be supported by qualitative surveys based on the case study methodology.

APPENDIX

Variables of the talent management antecedents model (cont.)

Condition of interpersonal relationships in employee teams translates into relationships among the teams in a company
n. Working teams are characterized by a high level of cohesion
o. Employees communicate in an open and sincere way and they share information on the mistakes they have made without being afraid of negative and unjust consequences
p. Employees are committed to their jobs, even when a company faces difficulties (crisis periods)
q. Employees willingly share knowledge
r. Employees play fair even when they compete with each other
s. There is the climate of friendliness within a company
t. The superiors are not anxious to delegate their responsibilities and powers
u. Employees do not resist managerial decisions
v. Self-control is applied wherever possible

7. Organizational structure (STRUCT)
a. An optimum formalization is in place combining both precise and clear procedures (when needed) and informal activities
b. Organizational structures are transparent
c. The responsibilities of employees are clear and complete
d. A company emphasizes teamwork
e. The members of project teams can be freely identified and nominated
f. Formal procedures and rules do not limit creativity

8. Middle-level management
a. Managers perform leadership roles in their teams
b. Managers coordinate their teams and foster relations
c. Managers perform coaching roles in their teams
d. Managers capture and disseminate information on business goals and objectives
e. Managers initiate changes in a company
f. Through their behaviour, managers set a good example of positive relations within a team and outside it
g. Managers are oriented to self-development and increasing their skills and competencies
h. Recruitment criteria for managerial positions include necessary knowledge and skills (resulting from the work position)
i. Recruitment criteria for managerial positions include social competencies (appearance, establishing relations, communication skills, teamwork)
j. Recruitment criteria for managerial positions include emotional competencies (empathy, self-consciousness, self-control, self-motivation)
k. Recruitment criteria for managerial positions include individual effectiveness (ability to work in stress, concentration)
l. The middle management provides a positive model of relations with employees

9. Corporate social responsibility
a. A company has established HRM policies taking into account the outcomes of surveys among employees (monitoring the employee satisfaction, the development of their careers, work conditions, leaves, safety and remuneration)
b. A company has established fair and transparent rules applied to its relations with employees and other stakeholders - when running business the company takes into account the interest of society
c. A company has developed and introduced the OH&S (Occupational Health and Safety) procedures going beyond the obligatory legal regulations
d. A company contributes to the development of its local community (cooperation with local business, job creation, education)
e. A company systematically supports the underprivileged (it contributes to the improvement of their living conditions)
f. A company has established the aims of reducing its negative impact on the natural environment (i.e. the average energy or water consumption)
g. Corporate social responsibility issues have been included into a strategy
h. The criteria for contracting suppliers are not limited to an economic dimension
i. The responsibility for planning and coordinating CSR policy is formally established (i.e. a position or a department responsible for CSR implementation, procedures and regulations)
Sparsity-Inducing Optimal Control via Differential Dynamic Programming
Optimal control is a popular approach to synthesize highly dynamic motion. Commonly, $L_2$ regularization is used on the control inputs in order to minimize energy used and to ensure smoothness of the control inputs. However, for some systems, such as satellites, the control needs to be applied in sparse bursts due to how the propulsion system operates. In this paper, we study approaches to induce sparsity in optimal control solutions -- namely via smooth $L_1$ and Huber regularization penalties. We apply these loss terms to state-of-the-art DDP-based solvers to create a family of sparsity-inducing optimal control methods. We analyze and compare the effect of the different losses on inducing sparsity, their numerical conditioning, their impact on convergence, and discuss hyperparameter settings. We demonstrate our method in simulation and hardware experiments on canonical dynamics systems, control of satellites, and the NASA Valkyrie humanoid robot. We provide an implementation of our method and all examples for reproducibility on GitHub.
I. INTRODUCTION
The propulsion systems of orbital satellites have a unique control limitation. In many cases, they use impulsive cold gas or bi-propellant thrusters incapable of or ineffective at low rates of firing. The resulting control then relies on fewer longer bursts of thrust to generate a constant amount of force over time while keeping the thrusters off between the bursts.
The required control inputs are then a sequence of binary on/off commands that are activated sparsely throughout the motion [1]. We call this type of control sparsity-inducing, referring to the sparse use of control inputs throughout the trajectory. Such control will prefer zero control (off) followed by a high control action (on) to continuous corrective commands applied throughout the whole trajectory.
Selecting the required degrees of freedom (DoFs) of a redundant system such as a humanoid robot is another application of sparsity-inducing control. Instead of deactivating the control inputs, the planner deactivates unnecessary joints. Consider a reaching task for the 38-DoF Valkyrie humanoid robot, where the goal is to extend the hand (end-effector) forward to point at a target. Solving this via motion planning involves finding suitable control inputs for the entire humanoid such that it balances itself and extends the hand. With traditional control-penalty methods, the planner readily discovers a motion where all the joints are simultaneously moving (cf. Fig. 10). Such a motion is arguably unnecessary and in fact undesirable as it requires more complex control to coordinate the motion. This can result in trajectories that are more difficult to execute and track by the low-level controller and may require more energy.
To achieve sparsity in the controls as well as automatic joint selection, we propose to use a sparsity-inducing penalty term in the control cost. This serves to both switch off unnecessarily small control inputs in the case of high-DoF robots and to discover thruster-like behavior for satellites.
A. Related Work
Optimal control methods synthesize dynamically consistent motion satisfying a set of task and dynamics constraints while minimizing an optimality criterion. Shooting methods, which optimize over the control inputs, and in particular algorithms derived from Differential Dynamic Programming (DDP) [2], have recently received renewed interest [3]- [5]. In contrast to simultaneous transcription methods [6], [7], shooting methods have faster computation times by explicitly exploiting the temporal structure and implicitly enforcing the dynamic feasibility of the solution.
A common optimality criterion is energy optimization, which is traditionally a squared cost on the control inputs (L 2 norm regularization). In practice, this leads to smooth control profiles and has been widely applied to canonical dynamic systems, computation of flight trajectories, as well as to synthesize highly dynamic maneuvers for legged robots [4], [5]. However, as no sparsity is introduced, on redundant systems this frequently leads to moving many joints even if not all joints are required to complete the task (cf. Fig. 10).
Sparsity in control inputs for planning was studied in the context of satellite motion planning [8]. There the authors used the L 1 norm, also known as Lasso model. Similarly, the authors in [9] applied an L 1 penalty using an Alternating Direction Method of Multipliers (ADMM) approach separating the problem into an optimal control update and a soft thresholding update. Whereas previous work considered an L 1 penalty applied to the force at the center of mass of the satellite, here we model the thruster behavior directly. Finally, the authors in [1] obtained thruster controls by optimizing the timing of thruster pulses, which are modeled as on-off controls. Here we use smooth L 1 costs to penalize the otherwise continuous thruster forces. As a result, our approach is directly applicable in standard optimal control frameworks.
Recently, the concept of sparsity has gained additional attention in the trajectory optimization and control community for terrestrial/traditional robotics applications such as manipulation. [10], for instance, investigated the use of Mixed-Integer and Lasso regression to reduce joint motion on humanoid robots in a hierarchical inverse dynamics control scheme. Nonetheless, enforcing sparsity for planning over longer horizons continues to be a challenge.
It is well known in the machine learning community that the gradient of the L 1 loss is discontinuous at 0, which makes the loss non-differentiable there [11]. Several differentiable metrics have been proposed to deal with this issue [9], [12]. In this paper, we study two such costs when applied to DDP: the SmoothL1 [13] and Huber [12] differentiable approximations to L 1. Using such sparsity-inducing cost terms with DDP raises two challenges requiring careful consideration: i) ensuring numerical stability/conditioning, as efficient implementations assume positive-definiteness of the control Hessian, and ii) trading off achievement of the desired tasks against sparsity of solutions through control regularization. Note that these challenges do not arise when using direct transcription/collocation, where tasks are enforced with hard constraints and the problem is solved using off-the-shelf Non-linear Programming (NLP) solvers.
B. Contributions
We study approaches to induce sparsity in optimal control solutions and make the following contributions:
1) Introduce sparsity-inducing regularization terms in DDP-type solvers.
2) Compare different strategies for sparsity-inducing regularization, namely the SmoothL1, Huber, and Pseudo-Huber losses, in terms of their convergence and numerical stability.
3) Demonstrate our approach for the swing-up of a cartpole and for satellite thruster control. Additionally, we demonstrate hardware manipulation experiments using the Valkyrie humanoid robot.
We provide our implementation and evaluations as open source software for reproduction. A supplementary video is available at https://youtu.be/YMXRZjFsqhc.
II. CONTROL REGULARIZATION
We begin by reviewing the use of $L_2$ penalties in the optimal control literature. The $L_2$ loss is defined as:

$$\ell_2(x) = \frac{1}{2}\,x^2.$$

The $L_2$ loss is used to regularize solutions by penalizing large positive or negative control inputs in the optimal control setting or features in machine learning.
In contrast, the $L_1$ loss is used to penalize solutions for sparsity, and as such, it is commonly used for feature selection in the machine learning community [14]. The $L_1$ loss is defined as the absolute value of its argument:

$$\ell_1(x) = |x|.$$

When $L_1$ is used with gradient-based optimizers as a regularization, it drives its argument to exactly 0 as opposed to small values. This is explained by the condition that the gradients of the regularization term and the task cost must be parallel.
Using the $L_1$ loss directly in gradient-based optimization is difficult due to the non-differentiability at $x = 0$, where the gradient is undefined. Smooth approximations to the $L_1$ function can be used in place of the true $L_1$ penalty. In this paper, we consider a smooth $L_1$ function from [13], which combines $L_1$ and $L_2$ losses, defined as:

$$\mathrm{SmoothL1}(x) = \begin{cases} \dfrac{x^2}{2\beta} & \text{if } |x| < \beta, \\[2pt] |x| - \dfrac{\beta}{2} & \text{otherwise,} \end{cases}$$

where β defines where the function switches from an $L_1$ to an $L_2$ cost. For small $|x| \le \beta$, SmoothL1 switches to $L_2$, since $L_2$ has a gradient at 0. We study another variant of combining $L_1$ and $L_2$ regularization, namely the Huber loss [12]:

$$\mathrm{Huber}(x) = \begin{cases} \dfrac{1}{2}\,x^2 & \text{if } |x| \le \beta, \\[2pt] \beta\left(|x| - \dfrac{\beta}{2}\right) & \text{otherwise,} \end{cases}$$

where β is again a shape parameter. The Huber loss has a variable slope, controlled by β in addition to mixing $L_1$ and $L_2$. This can be seen in Fig. 2, where for β = 0.5 the Huber cost has a lower slope than SmoothL1. Finally, we consider a smooth approximation to the Huber loss, the Pseudo-Huber loss, as defined in [15, Appendix 6]:

$$\mathrm{PseudoHuber}(x) = \beta^2\left(\sqrt{1 + \left(\frac{x}{\beta}\right)^2} - 1\right).$$

We illustrate the considered losses for different settings of their hyper-parameters in Fig. 2. The shape parameters of the smooth variants control how closely they approximate the true $L_1$ and Huber losses, respectively. The choice of the control regularization and its parametrization has an impact on convergence and sparsity of the output of the optimal control formulation.
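As a concrete reference for the loss shapes discussed above, the short sketch below implements the reconstructed SmoothL1, Huber, and Pseudo-Huber penalties together with their first derivatives, which a gradient-based solver would need. This is a minimal illustration of the formulas as written here, not code from the authors' released implementation.

```python
# Minimal sketch of the smooth sparsity-inducing penalties and their derivatives,
# following the piecewise definitions given above (beta is the shape parameter).
import numpy as np

def smooth_l1(x, beta):
    ax = np.abs(x)
    return np.where(ax < beta, 0.5 * x**2 / beta, ax - 0.5 * beta)

def smooth_l1_grad(x, beta):
    return np.where(np.abs(x) < beta, x / beta, np.sign(x))

def huber(x, beta):
    ax = np.abs(x)
    return np.where(ax <= beta, 0.5 * x**2, beta * (ax - 0.5 * beta))

def huber_grad(x, beta):
    return np.where(np.abs(x) <= beta, x, beta * np.sign(x))

def pseudo_huber(x, beta):
    return beta**2 * (np.sqrt(1.0 + (x / beta) ** 2) - 1.0)

def pseudo_huber_grad(x, beta):
    return x / np.sqrt(1.0 + (x / beta) ** 2)

# Example: compare the losses on a small grid, as in the discussion of Fig. 2.
u = np.linspace(-2.0, 2.0, 5)
print(smooth_l1(u, 0.5), huber(u, 0.5), pseudo_huber(u, 0.5), sep="\n")
```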
III. OPTIMAL CONTROL
We consider the robot as a dynamic system described by state $x$ composed of generalized coordinates $q$ and generalized velocities $v$. The system evolves under applied control inputs $u$ according to the state transition function $x_{t+1} = f(x_t, u_t)$, which incorporates the differential dynamics as well as an integration scheme. Here, we use a geometric representation of the configuration manifold of floating-base systems (SE(3)) with its geometric integrators, along with an energy-conserving symplectic integration scheme of the differential dynamics.
To describe a discrete optimal control problem with a fixed horizon, we additionally specify the integration time step $\Delta t$, the time horizon $T$, and the number of discretization knots $N$. This yields a state trajectory $X = \{x_1, \dots, x_N\}$ and a control trajectory $U = \{u_1, \dots, u_{N-1}\}$. Tasks and constraints are enforced by minimizing a cost function:

$$J(X, U) = l_N(x_N) + \sum_{t=1}^{N-1} l_t(x_t, u_t).$$

Shooting methods in particular minimize $J(\cdot)$ with respect to the control inputs only:

$$U^* = \operatorname*{arg\,min}_{U}\; J(X, U),$$

where $U^*$ is the optimal open-loop control trajectory. The corresponding state trajectory is obtained by performing a forward roll-out using the state transition function.
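To make the shooting formulation concrete, the following sketch shows a possible symplectic (semi-implicit) Euler state transition and the open-loop roll-out that evaluates J(X, U) for a given control trajectory. The simple point-mass dynamics, the time step, and the cost weights are placeholder assumptions used purely for illustration.

```python
# Illustrative sketch of a shooting-style roll-out: semi-implicit Euler integration
# of simple point-mass dynamics and evaluation of the total cost J(X, U).
import numpy as np

DT = 0.01  # integration time step (assumed)

def step(x, u):
    """Semi-implicit (symplectic) Euler: update velocity first, then position."""
    q, v = x[0], x[1]
    a = u                      # placeholder dynamics: unit-mass point, acceleration = force
    v_next = v + DT * a
    q_next = q + DT * v_next
    return np.array([q_next, v_next])

def rollout(x0, U):
    """Forward roll-out of the state trajectory under open-loop controls U."""
    X = [x0]
    for u in U:
        X.append(step(X[-1], u))
    return np.array(X)

def total_cost(X, U, x_goal, w_u=1e-2, w_f=100.0):
    running = sum(w_u * u**2 for u in U)
    terminal = w_f * np.sum((X[-1] - x_goal) ** 2)
    return running + terminal

U = np.zeros(200)                      # N - 1 = 200 controls
X = rollout(np.array([0.0, 0.0]), U)   # states x_1 ... x_N
print(total_cost(X, U, x_goal=np.array([1.0, 0.0])))
```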
A. Differential Dynamic Programming
Differential Dynamic Programming (DDP) [2], [16] is a classical method to solve the above unconstrained optimal control problem using Bellman's principle of optimality. DDP begins by making a quadratic approximation of the action-value function $Q$ around a reference trajectory,

$$Q(\delta x_t, \delta u_t) = l(x_t + \delta x_t, u_t + \delta u_t) + V'\!\big(f(x_t + \delta x_t, u_t + \delta u_t)\big),$$

where the value function $V$ computes the "goodness" of a state $x$, and the Q-function gives the same quantity for a state and an action. DDP minimizes the second-order Taylor expansion of the Q-function:

$$Q(\delta x_t, \delta u_t) \approx Q_0 + Q_x^\top \delta x_t + Q_u^\top \delta u_t + \frac{1}{2}\,\delta x_t^\top Q_{xx}\,\delta x_t + \frac{1}{2}\,\delta u_t^\top Q_{uu}\,\delta u_t + \delta u_t^\top Q_{ux}\,\delta x_t,$$

where $\delta x_t = x_t - x_t^{(0)}$ and $\delta u_t = u_t - u_t^{(0)}$, $Q_0$ is the zeroth-order term, and the subscript notation is shorthand for the partial derivatives of $Q$ evaluated at the reference trajectory point for $t \in [1, N]$. In the following, we drop the time subscripts denoting the way points for readability. We then give the derivatives:

$$\begin{aligned}
Q_x &= l_x + f_x^\top V'_x, \qquad Q_u = l_u + f_u^\top V'_x, \\
Q_{xx} &= l_{xx} + f_x^\top V'_{xx} f_x + V'_x \cdot f_{xx}, \\
Q_{uu} &= l_{uu} + f_u^\top V'_{xx} f_u + V'_x \cdot f_{uu}, \\
Q_{ux} &= l_{ux} + f_u^\top V'_{xx} f_x + V'_x \cdot f_{ux},
\end{aligned}$$

where $V'$ denotes the value function at the next time step ($t + 1$). The last terms are shorthand for tensor products.
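To illustrate how these Q-function derivatives might be assembled in practice, the sketch below computes them at a single knot point from the stage-cost and dynamics derivatives. The matrix names follow the notation in the text; dropping the tensor-product terms (a Gauss-Newton/iLQR-style simplification) and the toy problem dimensions are assumptions made here for brevity, not choices prescribed by the paper.

```python
# Sketch of assembling the Q-function derivatives at one knot of the backward pass.
# l_* are stage-cost derivatives, f_x/f_u the dynamics Jacobians, V_x/V_xx the
# value-function derivatives at the next time step. The second-order dynamics
# (tensor-product) terms are omitted, i.e. a Gauss-Newton/iLQR approximation.
import numpy as np

def q_derivatives(l_x, l_u, l_xx, l_uu, l_ux, f_x, f_u, V_x, V_xx):
    Q_x = l_x + f_x.T @ V_x
    Q_u = l_u + f_u.T @ V_x
    Q_xx = l_xx + f_x.T @ V_xx @ f_x
    Q_uu = l_uu + f_u.T @ V_xx @ f_u
    Q_ux = l_ux + f_u.T @ V_xx @ f_x
    return Q_x, Q_u, Q_xx, Q_uu, Q_ux

# Toy dimensions: n = 4 states, m = 1 control (e.g. the cartpole).
n, m = 4, 1
rng = np.random.default_rng(1)
f_x = np.eye(n) + 0.01 * rng.standard_normal((n, n))
f_u = 0.01 * rng.standard_normal((n, m))
Q_x, Q_u, Q_xx, Q_uu, Q_ux = q_derivatives(
    l_x=np.zeros(n), l_u=np.zeros(m),
    l_xx=np.zeros((n, n)), l_uu=1e-2 * np.eye(m), l_ux=np.zeros((m, n)),
    f_x=f_x, f_u=f_u, V_x=rng.standard_normal(n), V_xx=np.eye(n))

# Feed-forward and feedback gains used by the local feedback law discussed next.
k = -np.linalg.solve(Q_uu, Q_u)
K = -np.linalg.solve(Q_uu, Q_ux)
```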
DDP minimizes the quadratic approximation with respect to the control perturbation in the new coordinate system. Hence we obtain the local feedback control law:

$$\delta u_t^* = k + K\,\delta x_t,$$

with the feed-forward modification $k = -Q_{uu}^{-1} Q_u$ and state feed-back term $K = -Q_{uu}^{-1} Q_{ux}$. Since $\delta u_t^*$ is the minimum of the Q-function, DDP obtains a recursive set of equations for the value function at every time step:

$$V_x = Q_x - Q_{xu} Q_{uu}^{-1} Q_u, \qquad V_{xx} = Q_{xx} - Q_{xu} Q_{uu}^{-1} Q_{ux}.$$

In order to evaluate these equations, $Q$ and its derivatives are evaluated at the reference trajectory state $x_t^{(0)}$ and the optimal control $\delta u_t^*$ as calculated above. This is performed in a backward pass from knot $t = N$ to $t = 1$. The backward pass is followed by a forward pass in order to obtain the new state sequence $\hat{X} = \{\hat{x}_1, \dots, \hat{x}_N\}$ and controls $\hat{U} = \{\hat{u}_1, \dots, \hat{u}_{N-1}\}$:

$$\hat{u}_t = u_t^{(0)} + k_t + K_t\,(\hat{x}_t - x_t^{(0)}), \qquad \hat{x}_{t+1} = f(\hat{x}_t, \hat{u}_t).$$

IV. ENFORCING SPARSITY WITH L1 AND HUBER COSTS

DDP minimizes a general cost function $J(X, U)$ of the form defined in Section III. In this work, we propose adding an additional cost term for each control input $u_t$ that induces sparsity. The new cost function thus becomes:

$$J(X, U) = l_N(x_N) + \sum_{t=1}^{N-1} \big[\, l_t(x_t, u_t) + \lambda\, l_s(u_t) \,\big],$$

where $l_s(u_t)$ is one of the sparsity-inducing losses described in Section II and λ is a strength parameter, which controls the relative effects of the regularization loss and the objective loss.
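One way to incorporate the sparsity term into a DDP backward pass is to add its value, gradient, and Hessian to the stage control cost; the sketch below does this for the Pseudo-Huber penalty, whose smooth second derivative keeps the control Hessian well behaved. The small diagonal damping added to the Hessian is an illustrative safeguard for positive-definiteness assumed here, not a prescription from the paper.

```python
# Sketch of a sparsity-augmented control cost l(u) = 0.5*u'Ru + lam*sum(pseudo_huber(u))
# with the derivatives needed for the DDP backward pass (l_u and l_uu).
import numpy as np

def pseudo_huber(u, beta):
    return beta**2 * (np.sqrt(1.0 + (u / beta) ** 2) - 1.0)

def sparse_control_cost(u, R, lam, beta, damping=1e-8):
    s = np.sqrt(1.0 + (u / beta) ** 2)
    cost = 0.5 * u @ R @ u + lam * np.sum(pseudo_huber(u, beta))
    l_u = R @ u + lam * (u / s)                                       # gradient
    l_uu = R + lam * np.diag(1.0 / s**3) + damping * np.eye(len(u))   # Hessian, kept PD
    return cost, l_u, l_uu

# Example: 6 thruster commands, no quadratic penalty, only the sparsity term.
u = np.array([0.0, 0.0, 12.0, 0.0, -3.0, 0.0])
cost, l_u, l_uu = sparse_control_cost(u, R=np.zeros((6, 6)), lam=1e-3, beta=1.0)
print(cost, np.all(np.linalg.eigvalsh(l_uu) > 0))
```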
For the cartpole and satellite examples, we use quadratic state costs of the following form:

$$l_t(x_t) = (x_t - x^*)^\top Q\,(x_t - x^*), \qquad l_N(x_N) = (x_N - x^*)^\top Q_f\,(x_N - x^*),$$

applied to the running states $x_t$ and the final state $x_N$, respectively. For the Valkyrie example, we demonstrate the use of nonlinear task cost functions such as end-effector position, stability cost, and joint limits from [17]. The parameters λ together with β are hyper-parameters and their values define the interaction between the sparsity loss and the optimization criterion (task). We study their effect on the convergence of the solution in detail using the canonical cartpole in the following section.
V. EFFECTS OF SPARSITY LOSS ON TOY PROBLEMS
We firstly study the effects of sparsity-inducing costs on a one-dimensional problem, the swing-up of a cartpole, which is a canonical optimal control problem where a pendulum is attached to a cart moving on an infinite frictionless track. The goal is to swing the pendulum upright and move the cart to the origin. The problem is underactuated: the control inputs are linear forces on the cart, whereas the pendulum joint is not controlled. The time horizon is T = 200 with ∆t = 0.01 s, resulting in a 2 s trajectory. The control limits are ±30 N and $Q_f = 100\, I_4$, where $I_4$ is the 4 × 4 identity matrix.
A. Effects of weight and shape parameters on sparsity
Firstly, we examined the effect of the weighting term λ and the shape parameter β on sparsity. For all functions, β controls where the switching between an L 2 cost (for small |x| < β) and an L 1 cost (for |x| ≥ β) occurs. We consider controls to be zero when they are within [−β, β].
The results of a grid search over β and λ are in Fig. 3a for SmoothL1, Fig. 3b for Huber, and Fig. 3c for PseudoHuber. We plotted both the number of zeros and the final task cost; the latter tells us how close we are to the goal state.
A desired property of sparsity costs is that sparsity should increase with the weight term λ. As expected, we see a tradeoff between sparsity and task cost: the more regularization, the more sparsity, however at a higher task cost. This is in fact what the grid search shows. Another important property we observe is that for lower tolerances β, a much higher weight is required to achieve sparsity. Thus we can extract a criterion for choosing sparsity: 1) pick the largest β parameter according to how much noise tolerance the system has, and 2) adjust the control cost weight until the desired amount of sparsity is achieved. The noise tolerance of the system is the maximum value of controls beyond which rapid fluctuations cannot be tolerated.
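This tuning criterion lends itself to a simple automated search: fix β at the system's noise tolerance, then sweep the weight λ upward until the trade-off between zero controls and final task cost is acceptable. The sketch below outlines such a sweep; `solve_ocp` stands in for whichever sparsity-regularized DDP solver is used and is a hypothetical placeholder here.

```python
# Sketch of the proposed tuning procedure: fix beta from the noise tolerance of the
# system, then increase the sparsity weight until enough controls are exactly "off".
# `solve_ocp` is a hypothetical stand-in for a sparsity-regularized DDP solver that
# returns the optimized control trajectory and the final task cost.
import numpy as np

def count_zero_controls(U, beta):
    """Controls within [-beta, beta] are treated as switched off."""
    return int(np.sum(np.abs(U) <= beta))

def tune_sparsity_weight(solve_ocp, beta, weights, min_zeros, max_task_cost):
    for lam in weights:                       # e.g. np.logspace(-5, -1, 20)
        U, task_cost = solve_ocp(lam=lam, beta=beta)
        zeros = count_zero_controls(U, beta)
        if zeros >= min_zeros and task_cost <= max_task_cost:
            return lam, U, zeros, task_cost
    return None                               # no weight satisfied both criteria

# Usage (illustrative):
# lam, U, zeros, cost = tune_sparsity_weight(
#     solve_ocp, beta=1e-3, weights=np.logspace(-5, -1, 20),
#     min_zeros=100, max_task_cost=1e-4)
```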
For β = 10^−3 in Fig. 4 (left), all solutions achieve a final task cost of less than 10^−5 with 73, 58 and 86 zero controls for SmoothL1, Huber and PseudoHuber, respectively, yet we observe artifacts showing rapid control changes. Increasing the weight (from 7, 9, and 0.01 to 25, 25, and 0.023 for Huber, PseudoHuber and SmoothL1, respectively) can reduce these artifacts, while also increasing sparsity. In Fig. 4 (right) the solutions have 101, 102 and 89 zero controls and produce smoother control profiles. However, this comes with an increase in final state cost from less than 10^−5 to less than 10^−4.
In general, sparsity-inducing costs together with control/actuator limits produce so-called "bang-bang" control. On a real system, aggressive bang-bang control requires the actuator to change the output torque rapidly at each time-step. This is usually not possible due to actuator dynamics, for example, when using electric motors on the cart pole in our example. However, in some domains, namely satellite control, the underlying physical system and actuators are only capable of bang-bang control and this in fact is a desirable property.
VI. THRUSTER CONTROL FOR SATELLITES
We now consider a satellite modeled as a floating rigid body. The resulting control (thrust) and state trajectories are shown in Fig. 5. The different colors correspond to different thrusters being activated at the corresponding times. We see thruster peaks at the start and end of each stage, reaching the corresponding set output forces. The right side of the figure shows the position and linear velocity trajectories in state space.
A. Effects of weight parameters
We next examine the effects of the different weights on the satellite problem. In Fig. 6 we plot the control trajectories for two weights, λ = 10^−5 and λ = 10^−1. The plot for the correctly tuned λ = 10^−3 is in Fig. 5. We see a similar trend as in the cartpole system: if the control weight is too small, we observe artifacts in solution space. Tuning the weight produces sparse solutions with no artifacts and the desired bang-bang control profile. Finally, we observe a different result when increasing the weight on the satellite example: this leads to longer thruster bursts on the thrusters with the smaller limit, which in fact leads to less sparsity (23,649 zero controls compared to 24,509 when tuned). This can be explained by the reduction of peaks at 200 N, which are penalized more heavily, leading the solver to compensate by switching on the 50 N thrusters.
B. Effects on convergence
Finally, we examine the effects of L1 costs on convergence. A plot of the time to convergence for all considered cost terms is shown in Fig. 7. We computed this over a grid of weights λ ∈ [10^−5, 10^−1] for β = 1. Generally, time to convergence increases for all sparse costs, with Huber being slowest and Pseudo-Huber fastest to converge on average. This is expected, as sparsity-inducing cost terms have less steep gradients further away from zero, where the gradient of an L2 loss would be larger.

We next applied the sparse costs to a reaching task on a 38-DoF humanoid robot. We use acceleration-based linear system dynamics and nonlinear general task costs. In this case, the three-dimensional reaching task requires only two joints to move (the shoulder and the hip). This is illustrated in Fig. 10. The goal is to reach to x* = [0.5, 0.2, 0.9], a point directly in front of the robot at hip height. We discretized into T = 20 knots with ∆t = 0.1 s for a 2 s trajectory.
The resulting state and control plots for L 2 are in Fig. 8. Solving the problem with L 2 control regularization produces a solution that moves more joints than necessary. This is easily seen in the corresponding state plots (Fig. 8 and 9), showing the positions and velocities of the joints that move.
Applying a sparsity cost (Pseudo-Huber) to the problem leads to the solver choosing to move only the required joints. The resulting trajectories in Fig. 9 show this clearly. However, due to the approximation of the L1 costs, the robot does not numerically move only 3 joints, as it may appear; rather, 7 joints have velocities greater than 10^−3. Compared with L2, which moves 26 joints, this is nonetheless a significant reduction.
Finally, we executed the motion plans on the physical robot. The trajectories are tracked using an inverse-dynamics-based whole-body controller. We compared a solution with an L2 cost and a Huber cost. We plot the tracking results (distance of end-effector to target) in Fig. 11. For the Huber sparse trajectory, tracking is better both during and at the end of the trajectory.

Fig. 11: Tracking results for the Valkyrie reaching task. The red line highlights the end of the commanded trajectory. We highlight the task-space error at the end of the trajectory (T = 2 s) and the end of the experiment. This illustrates that the whole-body controller adds a delay to the motion execution, and that the tracking error is lower for the sparsity-induced trajectory.
VIII. DISCUSSION
We studied the effects of using an L 1 cost for the control of dynamic systems using optimal control. Since L 1 is not continuously differentiable, we studied three approximations: the SmoothL1, Huber, and PseudoHuber losses.
On a simple cartpole problem, L 1 costs lead to sparsity in control space by making a subset of the controls 0 and producing peaks that resemble square waves. We analyzed the performance of L 1 costs over a grid of values for the shape parameter β, which thresholds switching between L 1 and L 2 , and the control cost weight λ. For smaller values of β a much larger control weight is required to achieve sparsity. Larger control weights, however, lead to higher task costs. We thus propose picking the largest value of β according to the system's noise tolerance and then fitting the weight λ until the desired level of sparsity is achieved. We further motivate this approach by showing that relying on sparsity and final task cost alone can lead to non-smooth control trajectories with visible artifacts.
Scaling L1 costs to real-world robots presents new challenges. We successfully applied L1 to a kino-dynamic optimal control problem on the Valkyrie robot to select a subset of active joints for a low-dimensional reaching task. While L2 losses use more joints than necessary, L1 can automatically reduce the number of joints by setting the corresponding controls to 0. In practice, sparse controls can be tracked better. However, sparse controls also result in bang-bang control with higher commanded accelerations that can damage the hardware.
On the other hand, this is a desired control profile for thruster control of satellites. We were able to achieve thruster-like behavior with L1 costs in order to track a multi-stage center-of-mass trajectory. We further analyzed the timing performance of L1 costs, showing that there is an increase in convergence time when using L1 costs, with Huber being the slowest, followed by SmoothL1 and Pseudo-Huber. Finally, we showed that the weight parameter λ has a similar effect for satellite control as it does on the cartpole: low values of λ lead to artifacts and high values of λ lead to high task costs. In the satellite problem, however, high λ did not numerically lead to more sparsity, as it produced longer thruster peaks for the lower control-limit thrusters while penalizing the high-limit thrusters more.
Finally, we note that it is important to enforce control limits when using sparsity-inducing losses. For unconstrained methods (DDP [2], FDDP [5]) clamping of applied controls in the forward-pass works in practice but results in slower convergence and often leads to getting stuck in local minima. We tested the losses with active-set control-limited DDP [18], and in particular BoxFDDP [19] in our experiments due to its greater generalization without an initial guess.
Future directions for this research include time optimization of trajectories using DDP and addressing the convergence and control artifacts using regularization strategies.
|
2020-11-17T02:01:06.123Z
|
2020-11-14T00:00:00.000
|
{
"year": 2020,
"sha1": "43da38c323b3378712187f42e29bf3d05cdf790e",
"oa_license": null,
"oa_url": "https://www.pure.ed.ac.uk/ws/files/201438723/Sparsity_Inducing_Optimal_DINEV_DOA28022021_AFV.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "43da38c323b3378712187f42e29bf3d05cdf790e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
224911792
|
pes2o/s2orc
|
v3-fos-license
|
Native vs. Nonnative Raters in Second Language Pronunciation Assessment of Guttural Sounds
This short paper examines inter-rater reliability of native vs. nonnative raters in their assessment of L2 Arabic speech by American learners. It is predicted that ratings provided by native speakers of Arabic would be more consistent and show less variance than ratings provided by nonnative speakers of Arabic. In a rating experiment, native and nonnative raters evaluated the “nativeness” of American learners’ production of Arabic guttural consonants. A Pearson’s correlation coefficient shows a strong, significant inter-rater reliability in the judgments of native raters, and a poor, albeit statistically insignificant, one in the judgments provided by the nonnative raters. Findings also indicate that overall the native and nonnative rater groups produced comparable ratings, although no strong correlation could be established.
Introduction
Over the years, considerable attention has been given to teaching Arabic as a foreign language in many schools and universities worldwide with examinations and interviews as common means of assessing L2 Arabic conversational abilities.
However, these oral interviews and exams depend a great deal on the subjectivity of the raters. Even though a clearly defined scoring rubric is often provided to raters to help them assess test-takers' linguistic abilities with minimal bias, studies have shown that raters' judgments can still vary substantially. Such inconsistencies are unsettling, as they ultimately undermine the usefulness of subjectively scored tests.
This short paper endeavors to explore the effect of native vs. nonnative raters as a factor in the assessment of L2 pronunciation in Arabic learners. More specifically, it attempts to examine whether native raters' reliability and accuracy differ from those of nonnative raters when it comes to evaluating the pronunciation of Arabic sounds by American learners. The paper is organized as follows. Section 2 reviews the literature on factors affecting raters' reliability, amongst which is the nativeness of the rater. Section 3 details the methodology of the experiment and presents a correlational analysis of the native and nonnative ratings. The results are discussed in Section 4, followed by the conclusion in Section 5.
Literature Review
Numerous studies have investigated various factors and variables that affect raters' reliability. Variables such as native vs. nonnative, trained vs. untrained, and whether raters have different linguistic, EFL or occupational backgrounds have been considered. Barnwell (1989) states that students are usually assessed by "naïve" native speakers. His study of L2 American learners of Spanish shows that naïve native speakers of Spanish who have not received any kind of training on how to assess (and what to assess precisely) are much harsher than ACTFL-trained raters. The reaction of native English speakers and native Spanish speakers to recorded speech of Puerto Rican learners of English has also been examined. Fayer and Krasinski (1987) find that native speakers of Spanish are less lenient than native English speakers. However, raters in Fayer and Krasinski's study are neither trained teachers of English as a second language (ESL), nor are they trained in assessment. Shi (2001) examines native and nonnative EFL raters' judgments of Chinese students' English writings. Both groups of raters in her study are given the same scoring criteria and the same essays in order to find out if similar scores are obtained or not. The results of her study conclude that native and nonnative English teachers render similar scores; there were negligible differences in the evaluation of the Chinese EFL students' essays. Nonetheless, it is shown that nonnative teachers "attended more positively in their criteria to the content and language, whereas the native Chinese teachers attended more negatively to the organization and length of the essays" (Shi, 2001: p. 1).
Other studies focus on the background of raters as an important facet, and on whether training raters yields sustainable reliability. Jacobs, Zinkgraf, Wormuth, Hartfiel, & Hughey (1981) maintain that training, accompanied by the use of a well-defined scoring rubric, contributes to neutralizing the differences in raters' backgrounds, and to "ensure more consistent interpretation and application of the criteria and standards for determining the communicative effectiveness of writers" (Jacobs et al., 1981: p. 43). Shohamy, Gordon & Kraemer (1992) investigate inter-rater reliability of professional EFL teachers and nonprofessional ones, "lay raters". Their study examines raters' scores before and after training. It is concluded that while training has a noticeable effect on raters, since trained raters demonstrated higher inter-rater reliability, teaching background as a factor held little significance. However, the findings of Hadden (1991) contradict Shohamy et al. (1992) as well as Barnwell (1989). Hadden concludes that non-teachers produce higher ratings of students' second language English communicative skills than do ESL teachers (see also Lumley and McNamara, 1995).
To further explore whether training plays an important role in improving consistency in raters' assessment of ESL compositions, Weigle (1994) conducts an experiment on 16 experienced and inexperienced raters, with experience defined as participating in a previous rating process. The raters' assessment of the compositions in pre-and post-training sessions shows that "the training process was effective in two important positive ways: 1) it helped the raters understand and apply the intended rating criteria, and 2) it modified the raters' expectations in terms of the characteristics of the writers and the demands of the writing tasks" (Weigle, 1994: p. 214). Brown (1995) explores the effect of various linguistic and occupational backgrounds on the assessment of oral language tests for a Japanese Language Test designed for tour guides. Thirty three native and near-native raters are employed with multiple different backgrounds either in teaching Japanese as a foreign language, or in tour guiding (the first being the linguistic and the latter being the occupational experience). Results confirm that "there is little evidence that native speakers are more suitable than nonnative speakers or that raters with teaching background are more suitable than those with an industry background" (Brown, 1995: p. 13). Similarly, in the speaking assessment of four L1 Japanese students, Caban (2003) observes linguistic and educational training factors that might have a direct effect on raters' assessment. She maintains that interviews are always rated by human observers, and as such, it becomes almost impossible to avoid subjectivity. Bias, she emphasizes, can be caused by factors like age, L1, gender and educational background. In her study, 83 raters are asked to rate four nonnative students. The students are interviewed and asked by the raters to perform certain role-plays. The scoring categories for each student included grammar, fluency, pronunciation, content, appropriateness and overall intelligibility.
The findings indicate significant differences amongst the raters; it is suggested, however, that these differences are not directly related to the L1 background of the raters nor are they related to their academic training.
Still others, such as Charney (1984), have argued that training may have negative effects on raters. "Raters can be trained to agree on ratings", or they could agree on a certain rating based on superficial aspects of the text, such as the stylistics, handwriting and organization, rather than the content. Similarly, it could be argued that training raters might lead them to adhere so strictly to the provided scoring rubrics that they ignore relevant past experience in the field that might be essential in the rating process. This, nonetheless, can be circumvented if raters are more directly involved in the preparation and construction of the rubrics. Raters could decide, drawing on their experience in the field, how to develop a suitable rubric that provides better criteria for evaluation.
To sum up, these studies taken together imply that the use of appropriate scoring rubrics and proper training of raters will most likely lend more consistency, and hence reliability, to raters' judgments. While notable differences exist amongst raters with dissimilar linguistic, educational and occupational backgrounds, inconsistencies in assessment are still found among uniform raters who share common backgrounds and are equally trained in assessment and/or teaching; hence the need, in this study, to examine the effect of native language on raters' ability to produce reliable judgments of L2 pronunciation.
Method
The purpose of this paper is to contribute to the aforementioned wealth of literature on raters' reliability. In particular, the native vs. nonnative factor is taken up here. The study explores the inter-rater reliability of both native and nonnative Arabic raters in their assessment of the pronunciation of L2 learners of Arabic.
The following research question is of concern to this study: 1) RQ: Do native Arabic raters yield more consistent judgments than nonnative ones?
To address the research question properly, two main hypotheses will be tested: 2) H1: Inter-rater reliability will be significantly higher in native raters than in nonnative raters.
3) H2: There will be no positive significant correlation between overall native raters' and nonnative raters' judgments.
The native vs. nonnative factor was chosen mainly for three reasons. First, the studies that looked at this factor show conflicting results, which necessitates further research. Second, very few studies, if any, have been conducted on raters' assessment of L2 Arabic students' pronunciation. Third, this study aspires to examine the claim that native raters fare better on reliability than their nonnative peers because of their knowledge and perception of the L1; since various studies have not conclusively established this claim, it remains a speculation. Note that Hypothesis 2 here follows from H1; if inter-rater reliability is high amongst native raters compared to nonnative ones, then it is expected that the judgments of the native and nonnative groups will be different and will not correlate.
Participants
Four raters took part in this study. Two of them are native speakers of Arabic, one with a college degree and one with a master's. Both native raters (NR) are well-educated in Modern Standard Arabic (MSA) and only one of them has some teaching experience through private tutoring. The other two raters are native speakers of American English. One of them holds a master's degree in teaching Arabic as a second language and the other has just graduated from college in Washington, DC. None of the raters has ever participated in a rating process before and, given their limited background in assessment, they can fairly be described as professionally untrained. The raters were invited to take part in this study and, upon their consent, a simple interview was held with each one of them to collect some demographic and background information.
Materials
The material for this study is based on data drawn from another ongoing work on the production of Arabic guttural consonants by American learners of Arabic. The experiment examines the accuracy of ten L2 students' pronunciation of Arabic guttural consonants.
Data Analysis
All four raters were asked to take part in the rating process by listening intently to each utterance and providing judgments of the pronunciation stimuli produced by the ten American L2 learners of Arabic. The raters were instructed that the rating process was limited to the pronunciation of guttural consonants only, thus excluding other aspects of pronunciation such as vowel quality, voicing, or even other nonguttural consonants. A scoring rubric was provided as criteria for raters to follow in their assessment task. The scoring rubric included a scale from 1 to 5, as in Table 1.
Raters listened to each of the 100 recorded utterances (10 words per subject); they were given five seconds per item to rate, with no restriction on replays. The rating session lasted approximately fifteen minutes for each rater, and their judgments were then recorded and documented to reveal any discrepancies in the assigned scores.
1 Guttural sounds are produced at the back of the throat. The guttural consonants in Arabic include the glottals /ʔ/, /h/, the uvulars /q/, /χ/, /ʁ/ and the pharyngeals /ħ/, /ʕ/. Gutturals are generally considered rare sounds in language and are often more difficult for American learners of Arabic to pronounce.

An analysis for each rater group, as well as a comparison of the ratings given by the native and the nonnative speakers in the experiment, was conducted. Native raters gave higher ratings of the students' pronunciations (61.3%) than nonnative raters (55.4%). However, the mean difference between the two raters within each rater group varied considerably. The two native raters had a mean difference of 0.07, while the nonnative raters had a higher mean difference of 0.96.
The difference between these two within-group mean differences amounted to 0.89. Table 2 presents the mean difference between the paired raters in the native group for each subject. The inter-rater reliability between the first NR and the second NR can be shown by computing the correlation of the scores provided by each rater. This is represented in Figure 1. The scores in Table 2 were submitted to a Pearson's correlation coefficient measure, a parametric test of the strength of the relationship between two variables. Results are shown in Table 3. The test produced a highly significant measure of covariance, r = 0.90, p < 0.001. This indicates a high level of correlation and reliability between the two native raters' assessment scores. The two ratings overlap to the extent of r² = 0.817, which is a strong relationship.
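Reliability figures of this kind can be reproduced from the per-subject scores with a few lines of code. The vectors below are illustrative placeholders, not the actual values in Table 2.

```python
from scipy.stats import pearsonr

# Illustrative per-subject average ratings for the two native raters (placeholders, not Table 2)
nr1 = [3.0, 3.5, 2.5, 3.0, 3.5, 2.5, 3.5, 3.0, 3.5, 3.0]
nr2 = [3.0, 3.4, 2.4, 3.1, 3.3, 2.5, 3.4, 2.9, 3.4, 2.9]

r, p = pearsonr(nr1, nr2)
print(f"r = {r:.2f}, p = {p:.4f}, r^2 = {r**2:.3f}")           # r^2 is the shared variance (overlap)
print(f"mean difference = {abs(sum(nr1)/10 - sum(nr2)/10):.2f}")
```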
The mean difference between the two raters in the nonnative group is shown in Table 4. Interestingly, the extent to which the two raters agreed in their judgments becomes clear when the correlation between the NNR pair is examined, as illustrated in Figure 2. The coefficient value in Table 5 is low and indicates a poor correlation between the two nonnative raters' scores, r = 0.009, with extremely low overlap (r² = 0.00008). However, it should be noted that this lack of covariance in the nonnative data is not significant, as it failed to reach the significance level, p > 0.05. To calculate the inter-rater reliability between the native and nonnative groups, the scores of the native raters as well as the nonnative raters were collapsed and averaged. Mean differences were computed to show the degree of divergence between the two rater groups. Table 6 summarizes the average scores of each rater group and the mean differences. It is clear from Table 6 that the greatest discrepancy between the two rater groups lies in their averaged scores for subject 5, as indicated by the mean difference of 1.05. The smallest difference of 0.15 between the two rater groups is found in their ratings of subjects 3 and 8. The inter-rater reliability, or agreement, between the two rater groups is illustrated in the correlational graph in Figure 3. A two-tailed Pearson's covariance test reveals a moderate level of correlation between the native and nonnative rater groups' scores (Table 7). The Pearson's coefficient value here suggests a significant, albeit moderate, level of correlation between the two rater groups, r = 0.73, p < 0.05. A value of 0.60 and above is generally accepted as a reliable measure of correlation in the field of second language research. In other words, the two groups' ratings overlapped 53% of the time (r² = 0.5329).
Discussion
Recall that the main research question of this paper asked whether native Arabic raters yield more consistent judgments than nonnative ones. Hypothesis 1 predicted that native raters' judgments of American L2 learners of Arabic would be more reliable than the judgments of nonnative raters. It is expected, therefore, that the rating scores provided by the native raters would be closely matched, or highly similar. This is borne out in the results. The numbers in Table 2 show that the rating scores of native rater 1 (NR1) averaged 3.1 out of 5.0 (62%), while native rater 2 (NR2) averaged 3.03 out of 5.0 (60.6%); in other words, NR1 yielded slightly higher ratings than NR2. The mean difference between these two averaged assessment ratings is minimal (0.07), indicating comparable performance between the two raters. The reliability between the two native raters is significant, r = 0.90, p < 0.001, with a high degree of overlap, r² = 0.817, i.e. the two ratings overlapped 81.7% of the time. Hypothesis 1 also gives rise to the prediction that nonnative raters' assessment scores of American L2 learners' pronunciation would be more variable. Looking at Table 4, nonnative rater 1 (NNR1) averaged 2.29/5.0 (45.8%), and nonnative rater 2 (NNR2) averaged higher, 3.25/5.0 (65%). The mean difference between the two nonnative raters amounted to 0.96, which is quite large and suggests that their assessment scores lacked covariance, hence reliability. The degree of overlap is a meager r² = 0.00008, meaning only 0.008% similarity. However, it is important to note that this lack of reliability in the nonnative ratings did not reach the level of significance, r = 0.009, p > 0.05, and can therefore only be regarded as expressive of a tendency.
The NR group achieved a much higher level of inter-rater reliability than did the NNR group. In other words, as the figures indicate, consistency in the judgments of the two raters in the NR group is almost 14 times higher than in the NNR group. The performance of NNR2 was very close to that of the NRs; however, it appears that because NNR1 gave such low ratings, the reliability within the NNR group suffered greatly. These results partially support Hypothesis 1, which posits that native raters yield higher inter-rater reliability in their judgments of L2 pronunciation than do nonnative ones.
Hypothesis 2 states that no significant positive correlation should exist between the native and nonnative rater groups in their judgments of L2 pronunciation.
Thus, it is predicted that the ratings of the two rater groups would be different.
A cursory look at the results in Table 6 reveals that the native group produced an average of 3.06/5.0 (61.3%), and the nonnative rater group produced an average of 2.77/5.0 (55.4%). The mean difference between the two rater groups amounted to 0.295, which is fairly small and suggests some agreement. However, the level of correlation, although significant, is moderate, r = 0.73, p < 0.05, since the overlap amounted to only 53% (r² = 0.5329). Thus, it is concluded that the results of this experiment disconfirm Hypothesis 2. The native rater group provided higher overall ratings of American L2 learners' pronunciation than did the nonnative one. The ratings of the two groups in fact showed a significant, albeit not strong, positive correlation. This shows that the native and nonnative raters performed similarly on the judgment task, which is contrary to what Hypothesis 2 assumes. It is important to note that the level of similarity between the two rater groups is only slightly above chance level (53%); in addition, recall that the performance of NNR2 is exceptionally higher than that of NNR1, and is almost akin to that of the native raters. This could arguably have led to the ratings of the NNR group being similar in some degree to the ratings of the NR group. Hence, due to the small number of subjects and the large difference in performance between the nonnative raters, it is best to interpret this correlation as suggestive.
Although speculative, one explanation for why native raters performed more consistently than their nonnative peers is that native raters have acquired a fully developed sense of the language. Given that almost all native speakers transition through very similar developmental stages in the acquisition of their first language, their intuition, as well as their acute ability to perceive and categorize the sounds of their language, is quite heightened and might have contributed positively to their more unified judgments. It can be argued, by contrast, that nonnative raters lack this perceptual discriminability of second language sounds and may, therefore, resort to guesstimating in some cases, which can lead to arbitrary ratings, as presumably seen in the ratings of NNR1 in this study.
There are subtle differences between the productions of different sounds, and sometimes these variations are hard to perceive. It is commonly assumed that performance amongst nonnative speakers of a certain L2 background is highly variable from one person to another. Only in rare cases do we find nonnative speakers whose command of a foreign language, especially its phonology, is considered exceptional (cf. Ioup, Boustagui, El Tigi, & Moselle, 1994; Long, 1990; Moyer, 1999; Patkowski, 1994, for studies that report on cases of ultimate attainment in phonology). The variation in the performance of L2 speakers is gradable and could be reflected in the discrepancies in their ratings. On the other hand, almost all native speakers achieve one uniform level of nativeness. It is impossible to say that person A is more native-like than person B when both A and B are native speakers of the same language. True, they may differ in their eloquence or oration skills, but their ability to perceive and produce their L1 speech sounds should be comparable.
Alternatively, it could be the case that nonnative raters have developed a high sense of caution towards perceiving and producing L2 sounds, as expressed in the harshness of NNR1, who notably produced the lowest rating judgments amongst all native and nonnative raters. It is possible that, being overly corrective, NNR1 dismissed most of the pronunciation stimuli as being less native-like, thus assigning them the lowest of scores. Whatever the reason might be, the results of this study seem to contradict the findings of Fayer and Krasinski (1987), who reported that native speakers of Spanish were much harsher than native speakers of English: raters who shared the learners' L1 (Spanish) produced lower ratings of the Puerto Rican learners' L2 English speech than did the native English raters. Nonetheless, the current results appear more in line with those of Brown (1995), who found little evidence that native speakers are more suitable than nonnative speakers in the assessment of oral language tests for the tour guide Japanese Language Test. That is, both native and nonnative rater groups in Brown's study performed quite similarly, and no significant difference, just as in this study, was observed between them. The findings obtained here are also supported by Shi (2001), who concludes that native English teachers and nonnative ones rendered similar rating scores, and that marginal differences exist between the two rater groups in their evaluation of the Chinese EFL students. Kobayashi (1992), however, provides conflicting results, showing that English native speakers were more accurate than Japanese native speakers in their corrections of ESL compositions written by Japanese students.
Conclusion
This short paper sets out to examine the effect of native vs. nonnative raters as a factor in the assessment of L2 pronunciation. It explores whether the assessment of L2 Arabic speech produced by American learners yields higher inter-rater reliability among native Arabic raters than among nonnative Arabic raters. A rating experiment in which native and nonnative rater groups provided judgments of L2 Arabic students' utterances was carried out. Results show that while native raters exhibit significantly higher inter-rater reliability with a large degree of correlation, the nonnative raters' poor reliability and lack of correlation did not reach statistical significance. Findings also suggest that, overall, the native and nonnative groups behaved similarly in their judgments on the L2 pronunciation task, although no strong correlation was obtained.
Although the results of this study reaffirm former studies, more conclusive evidence is still needed. The small number of raters in this experiment, coupled with the nature of the stimuli and the raters' diverse linguistic backgrounds, may have affected the inter-rater reliability obtained here.
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.
|
2020-10-19T18:12:44.200Z
|
2020-09-14T00:00:00.000
|
{
"year": 2020,
"sha1": "a578247e8d3420ffa67d60da4344105a8953fe4a",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=103284",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c655f209452a49b5be0dc22850f15ce905581df8",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
246364367
|
pes2o/s2orc
|
v3-fos-license
|
Essentials of Neonatal–Perinatal Medicine Fellowship: careers in Neonatal–Perinatal Medicine
The clinical and academic landscape of Neonatal–Perinatal Medicine (NPM) is evolving. Career opportunities for neonatologists have been impacted by shifts in compensation and staffing needs in both academic and private settings. The workforce in NPM is changing with respect to age and gender. Recruiting candidates from backgrounds underrepresented in medicine is a priority. Developing flexible positions and ensuring equitable salaries is critically important. Professional niches including administration, education, research, and quality improvement provide many opportunities for scholarly pursuit. Challenges exist in recruiting, mentoring, funding, and retaining physician–scientists in NPM. Creative solutions are necessary to balance the needs of the NPM workforce with the growing numbers, locations, and complexity of patients. Addressing these challenges requires a multi-faceted approach including adapting educational curricula, supporting trainees in finding their niche, identifying novel ways to address work/life integration, and attracting candidates with both diverse backgrounds and academic interests.
INTRODUCTION
Neonatal-Perinatal Medicine (NPM) has evolved with shifts in staffing, workforce composition, and promotional pathways. A historical perspective and current opportunities/threats are offered to prompt discussion amongst trainees charting career paths, program directors designing education, and NPM leaders guiding faculty recruitment, development, and retention to ensure NPM remains a rewarding career for future generations.
EVOLUTION OF NPM PHYSICIAN PRACTICE
Since the recognition of the field in the mid-1970s [1], the measure of a successful NPM career has been recalibrated in response to medical advances, patient safety metrics, changing compensation, and promotional criteria. The original academic ideal, a "triple threat" physician, excelling in research, teaching, and patient care, was limited by clinical supervision and documentation standards beginning in the 1980s [1,2]. During the 1990s, community NICUs offered careers with patient care as a greater focal point [3]. Despite reports of improved morbidity and mortality in infants born at level III+ facilities [4,5] and efforts to regionalize perinatal care, NPM positions remain decentralized, particularly in states without limitations on hospital expansion [5]. Hybrid practice models, with providers cross-covering multiple NICUs, reduce distinctions between academic and private practice.
Neonatologists are meeting increased clinical needs across practice models. Attending in-house call has become more prevalent [6] and is associated with improved resuscitation outcomes and decreased NICU admissions [7,8]. A recent fellowship program survey noted that 50% of programs provide routine in-house attending coverage (with 22% as needed for acuity or fellow inexperience) [9]. Neonatologists with home call may supervise advanced practice providers (APPs) at multiple clinical sites or cover lower acuity deliveries, as only 15% of general pediatricians/family physicians cover deliveries outside of rural settings [10].
Financial considerations can influence career planning [13]. Following the dissolution of fee-for-service compensation in 1983, 58% of surveyed neonatologists considered changing subspecialties due to "excessive clinical loads" and "inadequate compensation," although providers in academics and higher acuity NICUs reported higher job satisfaction [14]. In 2019, the median inclusive compensation for neonatologists was $256,000 [12]. While neonatologists are currently among the highest-paid pediatric subspecialists, the 2021 Medicare Physician Fee Schedule may reduce future compensation [15]. Over the past decade, academic healthcare systems employed ~50% of neonatologists [12], compared to 38% in private practice [16]. Neonatologists are not equally distributed across states or population centers, with ~84% practicing in metropolitan areas [12]. These factors impact compensation, with private practice neonatologists reporting $15,000 greater annual compensation [12] and neonatologists in Northeast, Mid-Atlantic, and metropolitan areas earning less [17]. Female neonatologists also receive 3.7% less annual compensation, potentially resulting in a net loss of $430,000 over a 35-year career [11].
CURRENT PHYSICIAN WORKFORCE
Neonatology has had the most growth of any pediatric subspecialty over the past 20 years [18]. The number of NPM fellowship programs and first-year fellowship positions increased between 2016 and 2020, but fellowship applications decreased by 17% between 2015 and 2019 [19]. Although the number of practicing neonatologists has increased, the average age rose from 53.7 to 57 years between 2008 and 2015 [17], suggesting an aging workforce. Over the last 20 years, female American medical graduates drove the growth in the field [18]. Women now represent 75% of NPM fellows. International medical graduates represent 20% of the NPM workforce. Historically, the American Board of Pediatrics (ABP) has not collected racial and ethnic demographic data, but this information is necessary to assess trends in underrepresented in medicine (URiM) representation in NPM.
PATHWAYS TO ACADEMIC PROMOTION
NPM providers may participate in numerous types of scholarly activities, developing expertise in laboratory-based, translational or clinical research, quality improvement (QI), bioethics, education, global health, or epidemiology. However, academic faculty with non-traditional areas of scholarly pursuit may encounter ill-defined promotional criteria and inconsistent definitions of tracks across institutions. Challenges exist in securing sufficient funding, mentorship, and protected time to complete projects.
Clinician
Clinical faculty are instrumental to productive divisions and clinical programs. Although the responsibilities are intuitive, there is no uniform promotional pathway for "clinicians." Variability exists with respect to clinical time allocation, expectations for trainee education, institutional citizenship, and scholarly productivity.
NPM clinicians may develop expertise beyond direct patient care, such as through patient safety and QI initiatives. Numerous certificate programs are offered in this area, such as the Institute for Healthcare Improvement's online certification [20]. Other opportunities include biodesign and clinical informatics. In biodesign, clinicians work with interdisciplinary teams from nursing, engineering, design and computer science to develop new technologies to improve patient monitoring, evaluation, and diagnosis [21]. Clinical informatics is the scientific discipline focused on the effective use of biomedical information and knowledge in healthcare. There has been an exponential growth in the need for physicians trained in clinical informatics with the transition to electronic health records (EHR) [22]. Experts can transform healthcare by analyzing population data and designing or implementing communication systems to enhance individual and population health outcomes. The American Board of Medical Specialties approved clinical informatics as a board-eligible subspecialty in 2011. Beginning in 2023, board eligibility will require the completion of an ACGME-approved clinical informatics fellowship.
Clinician-educator (CE)
CE traditionally describes physicians with responsibility for trainee education [23][24][25]. CE tracks are typically non-tenured without research funding [24,25]. Attempts to define success in CE tracks have frequently relied on traditional promotional pathways, using productivity definitions created for physician-scientists (PS). As such, odds of holding a higher academic rank are lower for non-PS [26], and faculty devoting >50% of the time to clinical care reported prolonged time to promotion [27]. Recently, the CE role was redefined with a focus on educational scholarship [28], which dovetails nicely with the shift to competency-based education incorporating well-defined assessment and accreditation standards [29][30][31].
Developing career pathways with clear metrics can delineate the CE identity. In one survey, 55% of respondents indicated master's-level training was one way to gain relevant expertise [32], resulting in the development of graduate programs, including the master of health professions education and the master of medical education. In 2006, the Association of American Medical Colleges Group on Educational Affairs defined educational scholarship and identified specific activities that support academic promotion (Table 1) [33]. Evidence of excellence in education, documented through the quantity and quality of activities, and evidence of engagement within the educational community, demonstrated through contributions to its body of knowledge, were identified as core principles of educational scholarship [33].
Physician-scientist
Opportunities and challenges facing PS in contemporary academics are described previously in this series [34]. Limited institutional support, mentorship and funding created a "leaky pipeline," diminishing the number of neonatal PS. Interestingly, most pediatricians with R01-equivalent research awards are clustered at just 15 institutions [35]. Perceptions that PS receive lower compensation and have challenges with work-life balance may make research careers appear less attractive and feasible. Attrition of current PS and difficulty attracting new PS threatens to stall medical advancements.
Despite these challenges, it is critical to develop and promote opportunities to recruit and retain future PS. Laboratory-based neonatal physiology research has been rooted in large animal models with a focus on molecular biology and gene knockout models in rodents. Advances in technology have created expansive growth in stem cell and gene-editing research, as well as computational and systems biology research in genomics and related fields.
FUTURE OF NPM PHYSICIANS/PROVIDERS
The field of NPM has a bright future, filled with potential for career growth and professional satisfaction. Despite threats to the field, many opportunities and creative solutions are on the horizon. Table 2 presents a summary of key opportunities and perceived threats to the specialty.
Building and maintaining the workforce
The neonatology workforce faces increased clinical demands. Clinical networks can combine multiple nurseries under a single group, sometimes requiring providers to staff numerous hospitals across wide geographic areas. Varying team composition (with fewer APPs/trainees at community NICUs), long commutes, and substantial amounts of home call may negatively impact work-life integration in this model. As fewer general practitioners attend in the newborn nursery, coverage frequently falls to NPM networks. In addition, many NICUs reported sustained increases in census related to increasing survival of infants born extremely preterm or with significant congenital anomalies [38]. More providers are necessary, but meeting this demand presents a challenge. Many institutions increasingly rely on APPs to meet clinical needs. Despite the workforce expansion and clinical expertise offered by these providers, their incorporation into NICUs introduces challenges related to clarification of roles and responsibilities for direct clinical care, providing thorough training specific to neonatology, and ensuring opportunities for career advancement. In addition, ABP data affirms an aging workforce. Senior neonatologists may seek accommodations to decrease the physical demands of the job, such as limiting call/clinical weeks, team acuity, or travel to remote clinical sites. If accommodations are not granted, these providers may seek nonclinical positions or retire early, shifting the clinical burden to younger faculty. Patients, families, trainees, and junior faculty will lose the benefit of these providers' years of valuable experience.

Table 2 (excerpt), Personal and professional satisfaction:
• Adding many quality-adjusted life years with NICU interventions [51]
• Caring for infants and families
• Mentoring future generations of pediatricians/neonatologists
• Contributing to the body of research to optimize care provided
• Adaptation of clinical schedule for senior providers
• Leveraging growing technologies, such as telemedicine and EHR, to positively impact both patient care and work-life balance
• Higher salary than other specialties in pediatrics
• Excellent job security
Similarly, continued investments are needed in neonatologists' professional development and workplace resources to avoid burnout. Ethical dilemmas and unmet expectations for survival or quality of life take an emotional toll. Factors that may promote resiliency include technological solutions or additional staff support to ease documentation requirements, along with the creation of leadership opportunities [39].
Diverse opportunities for training and professional advancement
The evolution of clinical practice models now permits novel, diverse avenues for professional advancement within NPM.
Neonatologists can choose to excel in clinical care, providing holistic care for NICU families and leading interprofessional medical teams. Ensuring ongoing, evidence-based education for APPs, as well offering opportunities to participate in QI, teaching, and research, may help engage and retain these individuals in the NPM workforce. Some neonatologists find passion in education. Adept educators adjust their strategies to incorporate alternative methodologies, such as simulation and flipped classroom, to meet the needs of present-day trainees learning to think critically and navigate academic terrain. Educators must partner with PS colleagues to ensure physician trainees receive the optimal balance of clinical exposure and opportunities for scholarly investigation. Researchers may find their passion in identifying new treatments for medically complex patients, or ensuring that follow-up of NICU graduates informs future care. The creation of national and international collaboratives with access to large patient populations may rapidly advance knowledge and optimize practice.
Neonatologists can refocus their careers to mitigate changing personal needs, enhance work/life integration or pursue new interests. Division leaders may need to adapt existing positions to retain experienced providers and attract trainees, preserving and enhancing the workforce. Multi-site practice models create options for career advancement, including leadership roles at individual sites, as well as oversight for the larger practice's administrative, investigative, and QI efforts.
Expanding areas of scholarly activity for clinicians, CE, and PS often require specialized training and protected time. Customizable fellowship tracks may better support individual career development needs and enhance recruitment efforts. For example, a future clinician might only require 2 years of fellowship, similar to the pediatric hospital medicine fellowship [40]. Since medical advances have increased the survival of medically complex neonates, completing additional training in related subspecialty areas may improve patient care and garner recognition for these uniquely skilled clinicians. Examples include dual board certification or additional fellowship training, for example, in Neonatal Hemodynamics or Pediatric Cardiac Critical Care Medicine [41][42][43][44][45]. Trainees seeking exposure to scholarly activity could pursue the traditional 3-year NPM fellowship, with opportunities to pursue coursework, obtain an advanced degree, and complete scholarly projects. Individuals dedicated to pursuing clinical or laboratory-based investigation may obtain specialized research training at their institution (e.g., PhD) or participate in national programs such as the Physician Scientist Development Program [46].
Opportunities to optimize patient care
Institutions can leverage expertise to impact care delivered across networks. Partnerships between tertiary care centers and community NICUs can remain robust through the development of practice guidelines, joint faculty development and QI efforts. These partnerships facilitate access to higher-level care, evidence-based updates across settings, and flexibility in the practice environment.
Black infants have inferior clinical outcomes and increased mortality in comparison to white infants [47,48]. A recent study demonstrated a significant association between patient-physician racial concordance and improved patient outcomes, and this benefit increased substantially with increasing patient comorbidities [49]. Increasing the diversity of the NPM workforce to accurately reflect the population and optimize patient care in underrepresented and underserved populations remains an area of critical importance. While adopted more quickly in ambulatory specialties, telemedicine is gaining popularity in NPM. During the SARS-CoV-2 pandemic, telemedicine became more widespread, with allowances for billing with remote supervision. This enhanced providers' ability to cover NICUs from home and enabled tertiary centers to function as resources for community NICUs. Telemedicine presents an opportunity to expand expert neonatal care to rural/underserved areas and internationally. Potential uses of telemedicine include retinopathy of prematurity screening using remote digital retinal imaging, echocardiogram review and interpretation, subspecialty consultation, high-risk infant developmental follow-up, virtual family-centered rounds, and virtual education/simulation [50]. Future efforts should be directed towards optimizing and leveraging this technology to positively impact patient care as well as providers' work/life integration.
CONCLUSION
Despite evolving challenges, NPM remains an immensely rewarding career. Neonatologists work collaboratively, treat critically ill patients, educate future generations, and lead scholarly work. Job security is excellent and compensation is higher than in many other pediatric subspecialties. The ability to develop expertise within NPM provides personal and professional satisfaction. National and institutional leaders should build the workforce by retaining experienced providers, attracting future neonatologists, especially URiM and PS candidates, and engaging non-physician providers.
|
2022-01-29T14:41:33.624Z
|
2022-01-29T00:00:00.000
|
{
"year": 2022,
"sha1": "7256e3642f96f960e892b8d8df1bc23c52217cc7",
"oa_license": null,
"oa_url": "https://www.nature.com/articles/s41372-022-01315-7.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "f72e3f46d3ce1f96f5a736122996c755d074a6b7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
33674371
|
pes2o/s2orc
|
v3-fos-license
|
A focus on pleasure? Desire and disgust in group work with young men
There are a number of persuasive arguments as to why sexual pleasure should be included in sexual health work with young people, including the suggestion that this would provide young people with accounts of gender and sexuality that are more critical and holistic than those presented in the popular media, pornography and current sex education curricula. This paper considers the possibilities for engaging young men in critical group work about sexual pleasure in research and education contexts, drawing on a mixed-methods study of young people's understandings and experiences of ‘good sex’. The paper provides a reflexive account of one focus group conducted with a group of heterosexual young men and two youth educators. It explores some of the challenges to building relationships with young men and creating ‘safe spaces’ in which to engage in critical sexuality education in socially unequal contexts. In this case study, adult-led discussion elicits rebellious, ‘hyper-masculine’ performances that close down opportunities for critical or reflective discussion. Although there are some opportunities for critical work that move beyond limited public health or school-based sex education agendas, there is also space for collusion and the reinforcement of oppressive social norms. The paper concludes by imagining possibilities for future research and practice.
Introduction
For over three decades, researchers and practitioners have argued that sexual pleasure should be included in sexuality education and sexual health services for young people (Fine 1988; Ingham 2005; Centre for HIV and Sexual Health 2009; Allen and Carmody 2012). Broadly, these arguments suggest that a more positive and holistic model of sexual health that foregrounds the emotional and physical pleasures of sex and relationships would produce more favourable and gender equitable sexual health outcomes for young people. Much of this work has focused on the benefits of including pleasure in sex education programmes for young women, arguing that this would enable educators to create 'safe spaces' (Fine 1988, 35) in which young women could explore the 'discourses of desire' that researchers have frequently observed to be 'missing' from sexuality curricula and classroom practices. Increasingly, however, critics have argued that the inclusion of pleasure in sex education and sexual health services could also be potentially transformative for young men by creating opportunities for them to explore accounts of gender and sexuality that are more critical, diverse and equitable than those presented in popular media, pornography and current sex education curricula (Allen 2005; Beasley 2008).
Drawing on a study of young people's understandings and experiences of 'good sex' in London, England, this paper considers the possibilities and challenges of engaging young men in the 'pleasure project' in research and education contexts. The paper focuses on one focus group that I conducted as part of a broader mixed-methods study with a group of young, heterosexual men, a youth worker and a sexual health outreach worker. In the paper, I provide a reflexive account of this group interaction as a way of exploring what it means to 'work with men and boys' and engage them in critical discussion of gender and sexuality. What happens when you put together a group of young men and ask them to talk with each other and with adult professionals about 'good sex' and sexual pleasure? Why would a researcher or a practitioner want to do this, and what would be the challenges and benefits of doing this for young people, for researchers and for practitioners?
The pleasure project
Twenty-five years ago, Michelle Fine (1988) used an ethnographic study of young people in New York High Schools to argue that there was a 'missing discourse of desire' in the US public education system. In this influential article, Fine offers an analysis of the public discourses of sexuality that characterise debates about sex education in the USA, summarised as sex as violence, sex as victimisation, sex as individual morality and the discourse of desire. Fine argues that whilst the first three discourses are in abundance in US secondary schools, the fourth is largely 'missing' from 'official' sex education curricula and from sex education classrooms. This framing of sexuality around risk (the risks of male sexual violence, unwanted pregnancy and sexually transmitted infection) means that young women are educated 'as the actual and potential victims of male desire' (Fine 1988, 32), encouraged to say 'no' to sex and protect themselves from its potentially harmful consequences, rather than explore and understand their sexual bodies and desires.
Although the 'discourse of desire' seldom appeared in US school classrooms, Fine found that it frequently emerged in her conversations with her young female participants - 'drop outs' from a public high school. For example, there was Betty, who said, 'I don't be needin' a man who won't give me no pleasure but take my money and expect me to take care of him' (Fine 1988, 35). Fine argues that for these young women, 'sexual victimization and desire coexist' (35) to produce sexual meanings and experiences that defy the victimisation thesis. In the context of social ambivalence about female desire that separates the female sexual agent from the female sexual victim, Fine argues that young women need access to safe spaces in which to explore their desires and to develop a subject position from which they can negotiate the pleasures and dangers that they face in their everyday lives and relationships (Vance 1984). Without access to these spaces to develop an empowered sexual subjectivity, Fine argues, young women are more vulnerable to unwanted or unsafe sexual activity and sexual violence (Fine 1988; Holland et al. 1998).
Since the publication of Michelle Fine's paper over 20 years ago, feminist scholars have continued to document the absence of desire from health and education programmes and call for its inclusion in work with young people in a range of national contexts (Lees 1986, 1994; Lenskyi 1990; Thompson 1990; Connell 1995; Holland et al. 1998; Tolman 2002; Bay-Cheng 2003; Allen 2004, 2005; Kiely 2005; Fine and McClelland 2006; Beasley 2008; Hirst 2008; Carmody 2009; Casale and Hanass-Hancock 2011). Historically, this work has focused on the absence of female heterosexual desire from sex education programmes. More recently, however, writers have documented the absence of queer desires from sex education programmes (Harrison, Hillier, and Walsh 1996; Rasmussen 2004; Allen 2007) and the absence of discourses of masculine desire that imagine male pleasure in diverse, holistic and equitable ways (Allen 2004, 2005, 2007; Beasley 2008). Louisa Allen (2005), for example, argues that although young men's (hetero)sexual desires appear to be given more space in sexuality education programmes than young women's, this is framed in a heteronormative discourse of 'growing up' and becoming interested in 'the opposite sex' (Allen 2005, 150). Allen argues that this discourse of awakening male (hetero)sexual desire, insinuated in information about wet dreams and erections, has regulatory, prescriptive effects for young men. With the absence of equivalent reference to young women's desire, such a discourse constitutes young men as predatory sexual subjects.
The study
This paper draws on research that sought to critically engage with these debates and consider what it might mean in practice for a researcher or a practitioner to create spaces within which to explore discourses of desire (Fine 1988) or erotics (Allen 2004) with young people. The study, conducted between 2009 and 2013, used an incremental, reflexive research design consisting of an initial stage of exploratory and pilot work, followed by three stages of fieldwork using survey, focus-group and biographical interview methods with young people aged 16-25. My aim was to document young people's understandings and experiences of 'good sex' and sexual pleasure and to reflexively interrogate the effectiveness of different research methods for creating safe spaces within which to engage young people in conversation about sexual pleasure.
Discussion in this paper focuses on one group discussion conducted during the second stage of fieldwork between six young men, a youth worker, a sexual health worker and myself about what counts as 'good' and 'bad' sex. To facilitate the discussion I used a set of quotation cards, each containing a quote from a young person about 'good sex' or sexual pleasure. For example, 'Good sex is when you are really relaxed and can be yourself. It doesn't matter what happens or what sounds you make. It's ok.' Or, 'Good sex has to last long. If he's getting pleasure and he stops and I'm there and I ain't got my pleasure yet - I'm like - "you're selfish".' The aim of this, and other focus groups conducted at this stage of the research, was to explore how young people talked about good sex in group settings and to use a reflexive, situated analysis of these group encounters to explore the potential of the group space as a research and practice setting for engaging young people in work around sexual pleasure.
The study was based in a small, densely populated local authority in North London. Like many inner-city London boroughs the area has an ethnically diverse, geographically mobile population and high levels of socio-economic inequality; it is listed as one of the most deprived areas in England (Islington Fairness Commission 2012) and hailed as London's original site of 'super-gentrification' (Butler and Lees 2006, 467). The focus group discussed in this paper was held at a local youth centre with six young men who had been meeting weekly with a local youth worker (Steven) and an outreach sexual health worker (Graham) to take part in a series of sex education sessions. The young men were aged 17-21, all identified as heterosexual and were of diverse racial backgrounds. Several of the group were involved in the criminal justice system and most were involved in working informally or illegally from nearby housing estates in an area of high social inequality. Their youth worker Steven, who had known some of the young men for up to five years, informed me that none of the young men had been able to sustain any period of legal employment or training since leaving school aged 16.
The young men had been engaging in a participatory sexual health outreach programme developed by Steven and Graham in collaboration with the young men. When the sexual health outreach worker, Graham, asked the young men what topics they would like to cover in these sessions, the young men had requested a session on pleasure, claiming they would like to know more about female sexual pleasure. I had met the young men previously whilst accompanying a local outreach youth worker to their estate and had asked if I could come and observe Graham delivering the session. On the morning of the observation, however, it was decided that I should lead the session instead, using the 'good sex' discussion card activity outlined above. The young men all consented to the discussion being recorded and signed written consent forms. Steven and Graham both participated in the discussion, providing the opportunity for me to both observe professional practice, as well as generating data on the young men's sexual values and understandings of 'good sex'. As others have noted, focus groups offer the researcher the opportunity to generate spoken data on a given topic, as well as observing participants interacting within a group or peer context, generating data on active social processes (Kitzinger 1994; Crossley 2002; Barbour 2009). In this paper, I focus on a single group intervention to enable me to look in detail at these processes, examining how the 'local' context of the group interaction is shaped by the 'wider societal contexts' (Phoenix 2008) of social exclusion, class and gender inequalities.
'Speaking from experience': authority, protest and play
Throughout the session, the young men used jokes, banter and vivid storytelling to engage in the discussion-based activity and explore ideas about good and bad sex. The discussion was animated and enthusiastic, lasting for 45 minutes until one of the young men told me he was 'ready to go'. From the outset, the young men were defiant - refusing to adhere to the 'ground rules' that Graham and Steven attempted to establish about not talking over each other and not talking about other people's sexual experiences, as we can see in the extract above. At the end of the session, the group decided it was time to leave, politely dismissive of Graham's attempts to hold the group together and finish the discussion. One young man got up to leave and another reached over and switched off the audio recorder. As the young men were leaving, Steven and Graham told them that they had 'done brilliantly' and commented to me after the session that this was the longest session they had ever managed to have with the group.
Initially, the discussion took some time to get going as there were protests about the lack of food provided by the youth worker, complaints that the cups provided were not clean enough, jokes and sexual innuendos about the doughnuts I had provided, and protests about sexual health worker Graham's reminder that they 'haven't got to say anything about [their] personal experiences'. The group were so animated that I initially held back on giving out the discussion cards, convinced that the young men needed no prompts to start a discussion about what counts as 'good sex'. When I asked for their views, however, the young men struggled to respond and there was an awkward moment in which Luke - the group joker - tried to tell a funny story but struggled to know what to say, leaving him open to ridicule from his peers. Once I had handed out the discussion cards, however, the discussion was thick and fast flowing. Whiley - the most vocal and dominant group member - started the discussion by selecting a card that he claimed to be 'slightly true'. The card stated that girls can be 'more emotionally attached' to their sexual partners than boys (see above), which led to a long discussion about whether sex is better when you feel emotionally connected to your partner, why men might be less emotional about sex than women and whether a girl being in control during sex makes you 'less of a man'. I listened, asking occasional questions, whilst Graham and Steven probed the young men about why it's harder for young men to 'be emotional' and why girls might be 'more shy' about taking control during sex.
In this way, the discussion card activity helped provide a structure to the discussion and enabled the group to explore and question ideas about sex, gender, contraception and respectability. Throughout the session, there was a playful tension between the young men's banter and storytelling and the serious questions, comments and advice offered by Steven, Graham and myself, as we tried to pull the group away from their lively performance and towards the educational and research aims of the session.
As we can see in the extract below, when Graham and Steven attempt to educate the young men about the value of long-term relationships, the young men respond by telling funny stories about their own and each other's experiences of casual sex (McGeeney 2015). In this way, the telling of personal stories emerges in this group as a form of protest and play; a way of amusing each other and contesting the authority of the professionals in the room:

Ryan: When you slept with a girl too many times, boy it's dead.
Steven: But is that 'cos you're not emotionally attached to the girl, when it's dead, or knowing the girl better and then you have some feelings towards her?

In this extract the young men respond to Steven's questioning of their sexual values by encouraging Mark to talk about his experiences 'last night'. Despite Steven's attempts to steer the group away from this personal sexual storytelling ('we agreed, we agreed not to talk about other people's experiences!'), they continue, refusing to accept that this common practice could be problematic or that it could be possible to have knowledge and authority about sex that is not based on embodied personal experience (Holland et al. 1998; McGeeney 2015). As Whiley exclaimed at the start of the discussion: 'How do you know about having good sex if it's not from personal experience?'

'That's how bad it was!': stories of gender, class and disgust
Although the young men explored a number of ideas about good sex and sexual pleasure in this group discussion, talk was dominated by accounts of the pursuit of male sexual pleasure in casual (hetero)sexual encounters. Whiley dominated the discussion and was the most prolific storyteller in the group, frequently telling stories of anonymous, casual sex encounters, as well as talking about his relationship with his girlfriend and the different rules and logic he applies to casual and committed relationships. To my surprise, however, Whiley's stories were largely not stories of good sex, sexual desire or successful masculine conquest, but accounts of bad sex and expressions of disgust. This was particularly evident in Whiley's story about a girl he and his friend Trevor both had sex with in a lift at a nearby local housing estate:

Mark: Off meat from Dalston!!

During and after the group session I felt disturbed by this story and the misogynistic disgust that Whiley expressed for the faceless, nameless, young women he described. The specifics of the location and the vivid use of sensory, embodied metaphors in this story seemed to produce a disturbingly visceral account of the 'stinking' female body. As well as feeling disturbed, I also felt perplexed by this story; why would Whiley have sex with someone for whom he felt such repulsion? When I attempt to question Whiley's motivation for having sex with this woman ('Why, why did you have sex with her?') and to unravel where pleasure may feature in this story, Whiley returns the discussion to the vivid expressions of disgust ('the smell, just something started coming out'). I was left confused as to whether pleasure and desire formed part of this story at all and why Whiley and his peers would celebrate this story of 'bad sex' and subverted male sexual conquest.
After the group had finished, I discussed this interaction with the sexual health worker, Graham, who informed me that he had heard Whiley tell this story before. It appeared therefore that this story had particular currency for this group of young men and that part of the pleasure in the story was in its (re)telling - to the group who rewarded Whiley's disgust with their laughter and to the listening, questioning and un-amused practitioners.
In their discussion of young men's use of humour within secondary schools, Kehily and Nayak (1997) suggest that collective storytelling can play a central role in framing classroom humour and consolidating versions of heterosexual masculinity. In their ethnographic study, the researchers found that certain events would be reinvoked by young men for the 'shared pleasure of mutual retelling', elevating the event to a mythic status that became a key reference point against which young men would make sense of their identity within the school and the peer group context (Kehily and Nayak 1997, 76). They use the example of a story told to them by a group of young people about a student who made a 'cock' (penis) out of clay and showed it to a nun who worked at the school. They argue that the collective (re)telling of this story within the peer group context functions to consolidate a set of sexual values and a version of hypermasculinity, acting as a regulatory reminder and performative rehearsal for desirable behaviour within the male peer group (Kehily and Nayak 1997). Whiley's story of the 'slag' in the lift seemed to function in this group in a similar way to enable and consolidate a particular version of hypermasculinity predicated on repeated casual (hetero)sexual encounters. It also, however, makes humour from self-abasement, generating collective expressions of humour and disgust that delight the group whilst also marking out the moral authority of the young men in relation to the 'stinking' bodies of their 'foul' female partners and peers (Miller 1997; Ahmed 2004). As Sara Ahmed (2004) argues, disgust is performative, binding together the speaker and the audience in shared condemnation of the disgusting object, who in this story is Whiley's female sexual partner and peer (Miller 1997; Skeggs 2005; Tyler 2008).
As well as understanding this story - and the focus group encounter as a whole - as an example of gender and sexual identity work, I would argue that this was a performance of authority and protest that was classed as well as gendered. Throughout the focus group, the young men made regular claims to have done the 'bad' thing - 'I've had sex with bare slags' (Whiley), 'I beat on the first date yesterday!' (Mark) - whilst also relishing in visceral misogynistic language such as 'next bitch', 'slag' and 'grease bag'. Whilst sometimes such claims may have been uttered defensively, to establish status within the peer group, the young men also seemed to revel in their 'bad' language and 'foul' sexual exploits as if enjoying performing their transgression for the three adult professionals, each other and the audio recorder. In his sociolinguistic study of black inner-city youth, William Labov (1972) describes the ways in which the bad words and images used in misogynistic 'mother insults' are used so frequently and with such familiarity that the vividness of images such as 'your mother ate fried dickheads' (324) disappears. Labov suggests that the meaning of these sounds would be entirely lost without reference to middle-class norms and are used as a deliberate way of arousing 'disgust and revulsion among those committed to the "good" standards of middle class society' (324). In her study of working class femininity, Beverly Skeggs (2004, 2005) argues that one of the ways in which a classed position of judgment can be maintained is through assigning the other as 'immoral, repellent, abject, worthless, disgusting, even disposable' (Skeggs 2005, 977). Following from this, one of the most effective ways to deflect being devalued is 'to enjoy that for which you know you are being condemned' (976) - this involves not contesting or deriding authority but refusing the authority of the judgment and the value system from which that judgment emerges.
Whiley explicitly directs his mythic story to the sexual health worker Graham ('I swear to you Graham') and part of the humour in his performance is perhaps the way in which the story - told in this way - subverts the authority of sexual health discourses voiced by Graham during the discussion and the value system from which this emerges. Whiley's story enables him to educate Graham about bad sex, and in doing so he claims a position of authority in the peer group, in relation to Graham, youth worker Steven and me, and as a moral authority on this young woman's sexuality and body. As the young men were leaving the room at the end of the group, one of them remarked, 'Ester will never come back again' - apparently aware, although I had not voiced this in the group, that I would object to their stories and arguably confident that their attempt to make themselves objectionable and to refuse the authority of my judgement had succeeded.
'You know what type of girls I look for?': stories of desire and (dis)respect
My analysis of Whiley's story of the 'foul' girl in the lift suggests that there are implicit class, as well as gender, inequalities at play within the group context. In the following extract, these class inequalities are made explicit as Steven and the young men seem to momentarily acknowledge the young men's socially excluded and disadvantaged location:

Graham: So, a girl who works is more like - a girl who doesn't work is more likely to sleep around do you think?
Whiley: Yeah, a girl that don't work, just like, on the road, what's she doing, she obviously more likely to just be stepped out, sleeping about and that innit? A girl that's obviously working, whose got something - obviously something to do with her time. Like that would be the girl that would be more likely to be wanting a relationship and a proper life innit? Not just going around, sleeping about.
I remember being shocked by Steven's comment to Luke - 'What do you think they will see in you?' - that seemed to slip out before Steven could stop himself. Although the boys smoothed over the awkward moment with their laughter, Steven's comment laid bare the gaping inequality between the boys' current social exclusion and the 'culture of professionalism' (Young 1990, 58), which they aspire to access through their future sexual relationships. Steven's comment also revealed the inequalities of age and professionalism that structured the power dynamic between Steven and the boys that could not be so easily dislodged beyond the boundaries of this group encounter; although the young men are able to claim authority within this peer group setting - choosing when to start and end the session, claiming respectability and value among their peers - this sense of authority and esteem may not translate easily into other social and institutional spaces.
Ryan's suggestion that professional, working women have more self-respect seems to momentarily acknowledge the hierarchy of respectability that positions the young men and their young, repellent, jobless, female sexual partners as inferior and excluded from respectable, desirable, middle-class professionalism. Whiley quickly closes down this uncomfortable moment, however, through telling a new hyperbolic story of disgusting female excess, thus re-establishing the young men's - precarious and situationally specific - moral authority on the boundaries of good, respectable sex.
'You gotta get the rose petals on the floor': stories of female pleasure and desire
In this encounter, possibilities for exploring alternative accounts of good sex to the 'quick beat' in the local park or housing estate emerged in response to my questions about female sexual pleasure. For example, when I ask the young men 'how do you give a woman pleasure' the group, led by Fats, throw out a stream of sensual images and sounds to collectively construct a pastiche scene reminiscent of romantic comedy or erotica that delights the whole group:

Fats: First of all, set the mood right.
[laughter and talking]
Ester: OK. Go on.
Fats: Turn out the lights, put a one, two candles here and there [Ryan: I know that, I know that!], you gotta getting the lavender going.

Luke: You get the bubble bath running, you get the slow mellow beat marches playing in the background, like the music is bare low [sings] - with yoooou.
Fats: Then when she steps into the yard, she knows what month and time it is. And from there, that's it, isn't it. She should be aroused from morning.
Here the young men draw, not on their own experiences or scenes from their local communities, but on selected images from popular culture. The dominant pattern of affective practice (Wetherell 2012) is no longer one of humour and disgust, but one of sensuality, humour and playful fun. A similar pattern emerged when I asked the young men how they would feel if their female partner had an orgasm during sex. Rather than responding with a personal sexual story of what happened 'last night' or 'last week', Luke draws on the metaphor of Simba from the children's Disney film The Lion King:

Ester: So if you were, if the girl you're sleeping with has an orgasm, how does that make you feel?
Fats: Good for her.

[laughter]

Luke: That's the goal innit, that's the goal of ...
[laughter]
Luke's comic performance delights the group and enables him to defy ridicule from Fats and present a vision of how to incorporate ideas about female pleasure into the dominant group narrative of male sexual prowess and power. As these examples suggest, invitations to talk about female pleasure and desire in this group encounter were met with playful explorations of power, pleasure and sensuality, disrupting the more frequent expressions of misogynistic disgust. Perhaps the young men did not have any personal sexual stories of female pleasure that they wanted to share in this context, or perhaps they were humouring me - creating a comic, sensual performance that could be understood and enjoyed by someone from outside their community.
In both instances, these comic performances opened up space for the young men to go on to explore the relationship between gender and 'pace and power' as the young men were prompted by Graham and Simon to explore questions about gender, penetration, foreplay and the timing and sequencing of sexual encounters. This could suggest, as others have claimed, that asking questions about female sexual pleasure is disruptive (McClelland and Fine 2008), bringing a frequently hidden topic into the discussion. It was also the topic that the young men had told Graham and Steven they most wanted to cover in the series of sexual health outreach sessions. These were brief conversations, however, and the space was too chaotic to fully engage in unpicking some of the troubling gender and sexual norms at play in this group or to create opportunities for the young men to listen to voices that talked of different kinds of personal experience from the 'quick beat' in the park.
Conclusion: creating safe spaces
I started this paper by asking what happens when you put together a group of young people and ask them to talk with each other and with practitioners about pleasure. Why would a researcher or practitioner do this in their work? And what would be the benefits of doing so for young people, for researchers and practitioners? This paper provides one example of what could happen when we engage in this work as researcher/practitioners with a group of young, heterosexual, 'hard-to-reach' young men, detailing the ways in which what is possible to say publicly about sex and pleasure is shaped by peer and professional relationships, local and wider social contexts and inequalities (Phoenix 2008). In this particular group, we can see that asking young people to talk about good sex creates a space for storytelling, protest and vivid expressions of disgust for the young female working-class sexual body. There is also space for fleetingly exploring sensuality, power and female pleasure, as well as the young men's experiences of living with social exclusion in an area of high social inequality.
The debates set out at the beginning of this paper suggest that engaging young men and young women in critical discussion about sexual pleasure could create opportunities to explore more diverse, holistic and gender equitable accounts of gender and sexuality than those currently offered in mainstream media, pornography and sexuality education programmes. The data from this and other focus groups conducted as part of this study suggest that inviting young people to talk about good sex and sexual pleasure can provide opportunities to move beyond limited public health agendas concerned with preventing sexually transmitted infections and unwanted pregnancies (Ingham 2005). The study suggests that there is an appetite and enthusiasm for engaging with this topic and the potential to use structured activities to explore a range of challenging and contested moral issues and experiences.
The data also suggest, however, that there are considerable challenges for practitioners and young people in engaging in this work within peer groups and communities with high levels of social inequality. The adult-led model of work elicited rebellious performances from the young men, making it difficult to challenge passionately held views or to engage in reflective discussion. For the young men, the hypermasculine performance and banter at play meant that they were unable to talk in this group setting about a range of emotional experiences and desires without facing ridicule from their peers. For all of us this meant working with - rather than against - the performative and humorous mode and using personal experience and comic performance as the starting point for discussion and challenge.
Although there were opportunities for the group participants to challenge and question each other, there were also opportunities for collusion and reinforcement of oppressive social norms. The three professionals in the room struggled to build and maintain a 'safe space' to conduct the work required; conventional ground rules, such as respecting and listening to each other, maintaining each other's confidentiality and no discriminatory or oppressive language, were openly flouted and difficult to maintain. Keen to engage and support these vulnerable young men, the professionals in the room rewarded the young men with our attention, laughter and approval.
Whilst participating in this focus group, my main impression was not what was said but the desire, banter and hypermasculinity (Kehily and Nayak 1997) that were performed for each other, for me, for the two male practitioners and for the frequently referenced audio recorder. The sheer noise and energy of this group was something that I enjoyed. I found the young men funny and entertaining and when I listen back to the recording I can hear myself laughing - something that I now feel uncomfortable about when I read the transcripts and explore the shockingly loud accounts of misogynistic disgust and the quieter, sadder story of social exclusion. In my analysis of the focus group data I have tried to hold on to my initial impression of this group and find ways of capturing this sense of performance, energy and fun in my analysis and reading this encounter as more than just a 'sexist hangover' (Walkerdine 2011).
The focus group method worked to capture the ways in which social norms are created, contested and embedded in youth sexual cultures (see Crossley 2002; Barbour 2009), but as a one-off group encounter it was unable to document the varied dimensions of these young men's lives or document changes in their experiences, values or relationships. Unlike Steven and Graham, I had no ongoing relationship with these young men and was unable to return to engage with them in a longer piece of research/practice or find out about the multi-faceted dimensions of their lives and relationships. Reporting on this kind of encounter therefore runs the risk of further stigmatising the young men involved, capturing only the loud performance of misogyny and disgust and unable to document stories of potential vulnerability, care and respect.
The potential of our methods to elicit and potentially collude with stories of power and oppression does not suggest that we should shy away from attempting to engage hard-to-reach young men in the 'pleasure project'. It does, however, point to the need for sustained programmes of work in which it is possible to create the safe spaces required to move beyond the odd 'challenge here and there' (Lloyd 1997, 83) and engage in processes of personal and political change. Thinking beyond the 'limitations of method', this study suggests that this will require researcher/practitioners and young people to be open and ready for the unpredictable, contested and highly emotional nature of these encounters (Gillies and Robinson 2010; Allen and Carmody 2012). Further, it points to the importance of participatory and community-based research/action projects (e.g. Cahill, Rios-Moore, and Threatts 2008) with groups of young men that are able to confront and engage with social inequality, grounded in a community and social justice agenda.
Trade-off in perpendicular electric field control using negatively biased emissive end-electrodes
The benefits of thermionic emission from negatively biased electrodes for perpendicular electric field control in a magnetized plasma are examined through its combined effects on the sheath and on the plasma potential variation along magnetic field lines. By increasing the radial current flowing through the plasma, thermionic emission is confirmed to improve control over the plasma potential at the sheath edge compared to the case of a cold electrode. Conversely, thermionic emission is shown to be responsible for an increase of the plasma potential drop along magnetic field lines in the quasi-neutral plasma. These results suggest that there exists a trade-off between electric field longitudinal uniformity and amplitude when using negatively biased emissive electrodes to control the perpendicular electric field in a magnetized plasma.
Introduction
Electric fields perpendicular to magnetic surfaces in plasmas are of importance to a variety of applications. In magnetic confinement fusion for instance, radial electric fields are known to play an important role in confinement [1], as observed both in tokamaks [2,3,4] and stellarators [5]. Confinement enhancement is in this case believed to be enabled by transport barriers induced by E × B sheared flows [6,7,8] in the presence of radial electric fields. Perpendicular electric fields also offer opportunities for the design of alternative confinement schemes in toroidal geometry such as the magnetoelectric confinement studied by Stix [9], or more recently the wave driven rotating torus [10,11]. In addition and beyond fusion, controlling perpendicular electric fields is essential for a growing number of applications of E × B configurations [12], and notably for the development of high-throughput plasma separation technologies [13,14,15].
Although waves hold promise to produce such perpendicular electric fields [16], most of the experimental effort towards perpendicular electric field control to date has relied on electrode biasing. In magnetic confinement fusion experiments, the high plasma temperature and density generally prohibit inserting electrodes in the plasma core, and biasing experiments thus typically involve edge biasing [17]. While this technique has been shown to be effective at affecting edge properties under certain conditions [18,19], the use of a single polarized surface at the edge - a biasing configuration known as a limiter - does not provide control over how the applied bias distributes itself across magnetic surfaces. The perpendicular electric field indeed remains an intricate function of the plasma properties [17]. Cooler and less dense plasmas, especially in open-field line geometries, open additional possibilities for biasing studies, and a broad array of electrode geometries have been used for the primary purpose of instabilities and turbulence suppression [20,21,22] and flow control [23,24,25,26,27,28]. For perpendicular electric field control, a biasing configuration of particular interest is end-electrodes, that is electrodes intercepting magnetic field surfaces. The basic idea here, as originally suggested by Lehnert [29,30], is that one could control the electric potential of individual magnetic field surfaces through the biases imposed on a set of end-electrodes. More specifically, the potential of a given magnetic surface is expected to be set by the applied bias on the electrode on which this magnetic surface terminates, allowing in principle in turn for perpendicular electric field control. While very attractive, the practicality of this scheme remains a question. Indeed, while control has been successfully demonstrated under certain conditions [31,32], other experiments reported more contrasted results (see Ref. [13] for a more extensive discussion of end-electrodes biasing experiments in linear geometry).
Conceptually, the ability to control the potential of a magnetic surface in a plasma through the bias applied on an end-electrode can be split into two problems: controlling the potential drop along magnetic field lines in the quasi-neutral plasma and controlling the potential drop across the non-neutral sheath formed in front of the biased electrode. So far, these two problems have mostly been treated separately. On the former, control along magnetic field lines in a quasi-neutral plasma is guaranteed in the limit that field lines are isopotential, which as noted by Lehnert in his original paper [29] can in principle be asymptotically approached using a large enough magnetic field. Restated through conductivities, this is equivalent to the limit of a zero perpendicular to parallel conductivity ratio µ = σ_⊥/σ_∥. Practically though, µ is finite, which implies that field lines are not strictly isopotential. Examining specifically this problem, it has recently been shown that the relative variation in potential along field lines in a plasma column of radius a and length L is about τ = (L/a)√µ [33]. This suggests that a necessary condition for potential control along field lines is τ ≪ 1. On the latter, Liziakin et al. showed, considering the sheath formed in front of a negatively biased electrode, that the combination of the plasma perpendicular conductivity σ_⊥ and the ion saturation current sets a lower limit on the minimal plasma potential φ_p < 0 one can expect from applying a negative bias φ_e < φ_p [34]. Building on this finding, the same authors recently showed that the addition of thermionic emission from a negatively biased electrode can lower further the plasma potential φ_p at the expense of the voltage drop across the sheath φ_p − φ_e [35], which is consistent with earlier theoretical work [36] and observations [37].
In this paper, we consider these two problems in a unified model with the goal of highlighting the overall limits on perpendicular electric field control from negatively-biased end-electrodes, and examine in particular the effect of thermionic emission. In Section 2, we briefly introduce our model. In Section 3, the sheath models developed by Liziakin et al. [34,35] are first used to identify what the limits on potential control at the sheath edge are. In Section 4, these insights from sheath dynamics are then coupled into models for the potential distribution in a quasi-neutral magnetized plasma to highlight the influence of the sheath, and theoretical predictions are compared to numerical simulations. In Section 5, the main results are summarized.
Model description
The configuration studied in this work is illustrated in figure 1. It consists of a symmetrical linear machine with single full disk electrodes (sometimes referred to as button electrodes) terminating axially a magnetized plasma column. We note L the inter-electrode distance and r_e and r_g the radii of the biased electrode and grounded vacuum vessel, respectively. We further limit ourselves in this work to negative biases imposed on the disk electrodes, but consider both cold and hot surfaces to highlight the effect of thermionic emission. This simpler biasing configuration is used here to underline some of the key features expected in the more complex segmented concentric ring electrodes configuration proposed by Lehnert [29]. The plasma filling this volume is assumed to be produced by an external source (e.g. radio-frequency or ECR) which is not modeled in this work, and we further assume that the biased electrodes do not affect the plasma properties other than through the plasma potential. In addition, we consider the plasma to be uniform and quiescent, that is that density and temperature gradients, as well as possible instabilities, are neglected. Away from the sheaths formed in front of the biased electrodes, the plasma is therefore modeled as a uniform anisotropic medium characterized by parallel and perpendicular conductivities σ_∥ and σ_⊥. Finally, since our system is symmetrical, only the left-hand side of the domain, z ∈ [−L/2, 0], is studied.

Having perpendicular electric field control in mind, a key element of the plasma potential response as a function of the applied bias φ_e and the plasma parameters (through the conductivities) is the evolution of potential along magnetic field lines. For the uniform magnetic field considered here this means the variation of φ(r, z) at constant radius r_0, as illustrated in figure 2. To facilitate interpretation, we introduce the shorthands φ_sh(r) = φ(r, −L/2) and φ_mid(r) = φ(r, 0) for the plasma potential radial profile at the sheath edge and in the mid-plane, respectively. With this notation the voltage drop across the sheath and along field lines in the quasi-neutral plasma are then simply ∆_sh φ(r) = φ_sh(r) − φ_e and ∆_∥ φ(r) = φ_mid(r) − φ_sh(r). Finally, in an effort to ease comparison between experiments across a broad range of conditions, we work in this study with dimensionless variables and define for all electric potential quantities φ their dimensionless analog ψ through ψ = φ/T_e with T_e in eV.
Effect of the sheath on the plasma potential at the sheath edge
Inserting a biased electrode in contact with a plasma leads to the formation of a sheath, that is to a non-neutral region a few Debye lengths thick connecting the bulk plasma to the electrode [38] (see figure 2). For biased electrodes of sufficiently small surface area, the bulk plasma parameters, and in particular the plasma potential, are to zeroth order unaffected by the electrode bias. This is an essential hypothesis in probe theory [39].
On the other hand, for large enough surface areas, the bulk plasma potential can be affected by the applied bias [38]. The thin sheath region is then expected to control the extent to which the applied bias potential is passed along field lines into the plasma bulk.
3.1. Sheath transfer function for the potential: saturated and non-saturated regimes
A first picture of how the plasma potential at the sheath edge relates to the applied bias can be obtained from the model proposed by Liziakin et al. [34]. In this model the potential φ(r, z) in the plasma column shown in figure 1 is assumed to be independent of z, which corresponds to the limit µ = σ_⊥/σ_∥ → 0. In this case one simply gets φ(r, z) = φ_sh(r). This model further assumes that the potential is constant in the shadow of the electrode (that is for r ≤ r_e), allowing one to model the discharge as a constant radial current I flowing from r_e to r_g across magnetic surfaces. The influence of this last simplifying hypothesis is examined in detail via a more complete model in Section 3.3. Under these assumptions, the plasma potential in the electrode's shadow, φ_p = φ(r ≤ r_e, z), is simply R_⊥ I, where the perpendicular resistance R_⊥ opposing the current has been obtained by integrating between r_e and r_g the incremental resistance associated with the annular region of length L located between r and r + dr [34]. Note here that R_⊥ is the perpendicular resistance associated with half the plasma column length, consistent with the fact that we consider I as the current on one axial end-electrode. This plasma potential φ_p minus the voltage drop across the sheath ∆_sh φ must be equal to the applied bias φ_e, as illustrated in figure 3. In addition, current continuity requires this current I to be equal to the current drawn at the biased electrode.
Considering here an ion sheath formed in front of a hot negatively biased electrode, and counting positively particle fluxes leaving the electrode, one gets I = I_e − I_is − I_eth, where the electron, ion and thermionic electron currents have respectively been obtained by integrating the corresponding current densities over the electrode surface. Here n is the plasma density, c_s = √(eT_e/m_i) is the ion sound speed, Λ = ln(√(2m_i/(πm_e))) is a sheath parameter, T_W is the temperature of the electrode, W is the work function of the material and A_G is Richardson's constant, taken here equal to 6.0 × 10^5 A·K^−2·m^−2 [40]. Moving to dimensionless variables, the normalized current flowing through the plasma can hence be written, Eq. (5), in terms of Ξ = j_eth/j_is, a dimensionless parameter quantifying thermionic emission with respect to the ion saturation current. For an applied bias ψ_e ≤ ψ_p the current I is negative and reaches a minimum equal to −(I_is + I_eth) for sufficiently large ψ_p − ψ_e. Note that we chose here to ignore for simplicity possible dependencies of j_eth on φ_p through the Schottky effect [41], as well as possible dependencies of j_is on φ_p through a modification of the Bohm velocity at the sheath boundary in the presence of thermionic emission [42]. Plugging Eq. (5) into Ohm's law across the plasma resistance, φ_p = R_⊥ I, and moving to dimensionless variables finally yields a transcendental equation for the normalized plasma potential, Eq. (6), where we have defined χ = T_e/(I_is R_⊥). Eq. (6) is identical to that obtained by Liziakin et al. [34], other than for the choice of using dimensionless variables and the addition in this work of thermionic emission. Considering first the case without thermionic emission (Ξ = 0), figure 4 shows the evolution of ψ_p as a function of ψ_e for different values of χ as predicted by Eq. (6). For small biases applied at the electrode ψ_e, or a sufficiently small value of χ, the plasma potential is seen to follow the applied bias, that is ψ_p = ψ_e + Λ. In this case the plasma potential is hence controlled by the electrode bias. The voltage drop across the sheath is then small, ∆_sh ψ = Λ, and the current drawn at the electrode is negligible compared to I_is. For reasons that will become clear in the next paragraph, we refer to this regime as the non-saturated regime. For larger negative biases at a given value of χ, figure 4 shows that the plasma potential progressively deviates from the floating solution ψ_p = ψ_e + Λ until it reaches a minimal value ψ_p = −χ^−1. This behaviour is consistent with the fact that the current I < 0 then approaches its minimum value −I_is. At this point the plasma potential ψ_p is no longer controlled by the applied bias ψ_e, and any further decrease in applied bias ψ_e is entirely recovered in the voltage drop across the sheath ∆_sh ψ, which grows as |ψ_e| − χ^−1. Reflecting the property of the current in this regime, we refer to it as the saturated or current-limited regime. In contrast with the non-saturated regime, the minimum plasma potential is here set by the maximum current drawn at the biased electrode. The dimensionless parameter χ^−1 can then be interpreted as a measure of the maximum radial voltage drop the plasma column can sustain.
From the definition of R_⊥, χ can be written explicitly in terms of the plasma parameters and the column geometry (Eq. (7)). The transition from the non-saturated to the saturated regime, that is from a plasma potential controlled by the electrode bias to a plasma potential controlled by plasma parameters, takes place for |ψ_e|χ ≥ 1. The boundaries in physical operating parameters space (densities, temperatures, bias) for this regime transition can be obtained by developing the parametric dependencies of j_is and σ_⊥ for the plasma conditions under consideration, as will be done in Section 4.
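To make the saturated and non-saturated regimes of figure 4 concrete, the short sketch below solves the current balance of this section numerically. The explicit form of Eq. (6) is not reproduced above, so the expression used here, χψ_p = exp(Λ + ψ_e − ψ_p) − (1 + Ξ), is a reconstruction assembled from the definitions given in this section (I/I_is = exp(Λ + ψ_e − ψ_p) − 1 − Ξ and ψ_p = I R_⊥/T_e); the function name, the default value of Λ and the parameter values in the example are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not from the source): numerical solution of the current balance
# of Section 3.1 for the normalized plasma potential psi_p, assuming the balance
#     chi * psi_p = exp(Lambda + psi_e - psi_p) - (1 + Xi)
# reconstructed from I/I_is = exp(Lambda + psi_e - psi_p) - 1 - Xi and
# psi_p = I * R_perp / T_e. All parameter values are illustrative only.
import numpy as np
from scipy.optimize import brentq

def plasma_potential(psi_e, chi, Xi=0.0, Lambda=3.0):
    """Normalized plasma potential psi_p (in units of T_e) for a given bias psi_e."""
    def balance(psi_p):
        return chi * psi_p - (np.exp(Lambda + psi_e - psi_p) - (1.0 + Xi))
    lo = -(1.0 + Xi) / chi - 1e-9          # saturated limit: balance is negative there
    hi = max(0.0, psi_e + Lambda) + 1e-6   # floating-like limit: balance is positive there
    return brentq(balance, lo, hi)

if __name__ == "__main__":
    for psi_e in (-5.0, -20.0, -100.0):
        for Xi in (0.0, 5.0):
            print(f"psi_e={psi_e:7.1f}  Xi={Xi:3.1f}  "
                  f"psi_p={plasma_potential(psi_e, chi=0.5, Xi=Xi):7.2f}")
```

With this form, weak biases return ψ_p ≈ ψ_e + Λ, while strongly negative biases converge to the saturated value −(1 + Ξ)/χ, mirroring the behaviour described above.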
Additional control offered by thermionic emission
Examining now specifically the effect of thermionic emission, Eq. (6) shows that the plasma potential in the saturated regime, that is the minimal plasma potential no matter how negative the applied bias, writes

ψ_p^sat = −(1 + Ξ) χ^−1.    (8)

Compared with the case absent thermionic emission studied above, one finds that its amplitude is (1 + Ξ) times larger. This result is simply the consequence that the discharge current is (1 + Ξ) times larger when taking into consideration thermionic emission, while the perpendicular plasma resistance R_⊥ remains the same. Since ψ_p^sat by definition coincides with the applied bias ψ_e below which the transition from non-saturated to saturated regime occurs, the above result suggests a larger range of accessible plasma potential for a given χ (that is for a given set of plasma parameters). Put differently, as illustrated in figure 5, increasing thermionic emission for a given bias and a given χ allows transitioning from a saturated regime to a non-saturated regime, and therefore regaining control over the plasma potential. In the process the plasma potential decreases and progressively approaches the applied bias as thermionic emission increases, consistent with experimental observations [37].
Note that while the added control offered by thermionic emission suggests operating with large thermionic currents I_eth, there is in practice a limit to how large this current can be. Indeed, past a certain value a virtual cathode is expected to form in front of the electrode. This will in turn naturally limit the thermionic current reaching the bulk plasma, and thus the thermionic contribution to the discharge current. The value of I_eth (and thus of Ξ) for which this virtual cathode is expected to form can be derived analytically by solving Poisson's equation in the sheath region [36].
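As a rough, hedged illustration of how the emission parameter Ξ scales with electrode temperature, the snippet below combines the Richardson-Dushman law for j_eth, using the Richardson constant quoted above, with the standard Bohm expression j_is = e n c_s for the ion saturation current density. All numerical values in the example are assumed for illustration and are not taken from the paper; the Schottky correction and the virtual-cathode limit discussed above are ignored.

```python
# Minimal sketch (assumed helper, not from the source): estimate Xi = j_eth / j_is
# from the quantities introduced in Section 3.1, combining the Richardson-Dushman
# law j_eth = A_G * T_W**2 * exp(-W / (k_B * T_W)) with the Bohm ion saturation
# current density j_is = e * n * c_s.
import numpy as np

E = 1.602e-19      # elementary charge [C]
K_B = 1.381e-23    # Boltzmann constant [J/K]
A_G = 6.0e5        # Richardson constant [A m^-2 K^-2]

def emission_parameter(n, T_e, m_i, T_W, W):
    """Xi = j_eth / j_is for density n [m^-3], T_e [eV], ion mass m_i [kg],
    electrode temperature T_W [K] and work function W [eV]."""
    c_s = np.sqrt(E * T_e / m_i)                          # ion sound speed
    j_is = E * n * c_s                                    # ion saturation current density
    j_eth = A_G * T_W**2 * np.exp(-E * W / (K_B * T_W))   # thermionic current density
    return j_eth / j_is

if __name__ == "__main__":
    # Illustrative numbers only (argon-like ion, LaB6-like emitter), not from the paper.
    print(emission_parameter(n=1e17, T_e=3.0, m_i=6.6e-26, T_W=1700.0, W=2.7))
```

Such an estimate only indicates whether a given emitter can reach Ξ of order unity or larger; as noted above, space-charge effects ultimately cap the useful emission current.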
Radial potential profile
Up to this point we have only examined the relation between the applied bias φ_e and the plasma potential in the electrode shadow φ_p. This is because in the model introduced in Section 3.1 the potential is assumed constant for r ≤ r_e and the current constant for r > r_e. Integration of Ohm's law with the incremental resistance given in Eq. (3) hence leads to a simple logarithmic profile for φ_sh connecting φ_p for r < r_e to the ground at r = r_g. A more physical model has recently been proposed by Liziakin et al. [35], noting that the radial current I(r) at any radius must in steady state be equal to the current density through the sheath integrated over the electrode surface inside radius r. By implicitly assuming that there is no current source past the outer radius of the electrode r_e, the requirement for a constant potential in the electrode shadow can then be lifted, and the plasma potential obtained by integrating the local Ohm's law dφ_sh(r) = I(r) dR(r) from the ground reference at r = r_g to any radius r. Plugging in Eq. (9) and going back to dimensionless variables yields a linear integro-differential equation for ψ_sh, Eq. (11), involving a function F that depends on the plasma parameters through T_e and σ_⊥, on the plasma composition through the sheath parameter Λ, on the applied bias ψ_e, on the thermionic current parameter Ξ and on the geometric parameters r_e, r_g and L. In Ref. [35], Eq. (11) was solved numerically for spatially dependent plasma parameters (plasma density and electron temperature) measured in an experiment, as a way to validate potential profile predictions against experimental data. Here we similarly solve Eq. (11) using a shooting method, but instead use these results to highlight some characteristic properties of the radial potential obtained for a uniform plasma (i.e. uniform σ_⊥), as shown in figure 6. Another important distinction is that Liziakin et al.
focused specifically in Ref. [35] on the case where σ_⊥ depends linearly on dψ_sh(r)/dr, by assuming that ions drift with velocity B_0^−1 dφ_sh(r)/dr in a background static neutral gas. This corresponds to the limit of supersonic ion rotation, Ωr ≫ v_thi. In contrast we purposely do not consider here for the moment a particular mechanism for cross-field conductivity, but we do assume that σ_⊥ does not depend on φ nor on its radial derivatives. We simply note here that this hypothesis may be representative of either ion-neutral collisions in the slow rotation limit Ωr ≪ v_thi, or of a different conductivity mechanism (e.g. electron-neutral collisions as examined by Liziakin et al. in an earlier study [34]).

Figure 6. Normalized plasma potential ψ_sh(r)/|ψ_sh(0)| radial profile for different values of χ, and normalized current density radial profile (inset). Green and red curves highlight the potential profiles obtained for the non-saturated and saturated regimes, respectively. The black dots for r ≤ r_e illustrate a parabolic radial dependence, whereas the dotted black curve near r_g represents a logarithmic radial profile.
Looking first at the normalized current density, one recovers the non-saturated and saturated regimes for respectively small and large values of χ. Indeed, consistent with the simpler model used in Sections 3.1 and 3.2, the current density on the electrode j_∥(r) is observed to be uniform and equal to its maximum value (in absolute value) in the saturated regime, whereas it is negligible in the non-saturated regime. However, we also observe that these two regimes are now separated by a group of curves obtained for intermediate values of χ for which the current goes from near zero on-axis to its saturated value at the outer edge of the biased electrode. This shows that saturation is actually a local phenomenon. By analogy with the nomenclature introduced earlier, we refer to this regime as partially saturated. A closer examination reveals that saturation first appears at the outer edge of the biased electrode and progressively moves radially inward with increasing χ until full saturation has been reached, which is to be expected for a monotonically increasing radial potential profile.
Moving on now to the normalized plasma potential radial profile and starting from small values of χ, that is in the non-saturated regime, one observes a nearly constant profile in the shadow of the electrode (r < r_e). This result can be explained from Ohm's law and the fact that the current drawn at the electrode is in this case very small. In this regime, in the electrode shadow one simply finds ψ_p = ψ_e + Λ. This same profile holds as χ increases, up until partial saturation begins at the outer edge of the electrode. As the current then becomes larger (in amplitude), Ohm's law predicts an increase of ψ_sh. Since, as shown above, partial saturation moves radially inward with χ, the position where ψ_sh(r) begins to deviate from its constant on-axis value is also observed to move radially inward. Finally, once the fully saturated regime has been reached, a new profile sets in and holds for arbitrarily large values of χ.
Because the saturated regime is characterized by a saturated current at the electrode, the flux of electrons reaching the electrode is then by definition null. This property allows for further analysis. Indeed, Eq. (11) then becomes a simple ordinary differential equation whose solution for r ≤ r_e, Eq. (12), is the sum of ψ_sh^sat(r_e) given in Eq. (8), which was assumed to be the constant potential found in the electrode shadow in Sections 3.1 and 3.2, and of a term quadratic in r. In contrast with that earlier assumption, it is found here that the potential in the electrode shadow actually exhibits a parabolic radial dependence. This is noteworthy insofar as this profile leads to a constant angular E × B drift frequency, and thus to solid body rotation in the shadow of the electrode, assuming the crossed-field drift is the dominant contribution to rotation. It should also be noted here that the fact that a parabolic profile is obtained in Eq. (12) instead of the r^{3/2} profile derived by Liziakin et al. [35] is the consequence of the fact that, as mentioned above, σ_⊥ is assumed in this work to be independent of dφ_sh(r)/dr.
Finally, one notes that all profiles feature a logarithmic radial dependence past the outer radius of the electrode r_e, no matter the regime. This is the direct consequence of the fact that we assumed zero axial current past r_e, so that the radial current I(r) in Eq. (9) is constant for r > r_e. In this case the potential simply writes

ψ_sh(r > r_e) = I(r_e) ∫_r^{r_g} dr′/(π L σ_⊥ T_e r′),    (13)

which indeed yields the observed logarithmic radial dependence, consistent with Ref. [34]. Profiles then only differ through their value at r = r_e, which is a function of the total current emitted by the electrode, I(r_e).
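The piecewise structure of the saturated profile (parabolic inside r_e, logarithmic outside) can be reproduced with a simple radial integration of Ohm's law. The sketch below is not the authors' calculation: it assumes the incremental perpendicular resistance of an annulus takes the form dR = dr/(π L σ_⊥ r) used in Eq. (13) as reconstructed above, takes a uniform (saturated) current density on the electrode, and uses purely illustrative parameter values.

```python
# Minimal sketch (not the authors' calculation): radial integration of Ohm's law,
# psi_sh(r) = (1/T_e) * integral_r^{r_g} I(r') dR(r'), for a uniform (saturated)
# current density j0 < 0 collected over the electrode r <= r_e. The resistance
# element dR = dr / (pi * L * sigma_perp * r) and all numbers are assumptions.
import numpy as np

def sheath_edge_profile(j0, r_e, r_g, L, sigma_perp, T_e, n_pts=4000):
    """Return (r, psi_sh), marching inward from the grounded wall at r = r_g."""
    r = np.linspace(r_g, 1e-3 * r_g, n_pts)
    psi = np.zeros_like(r)
    for k in range(1, n_pts):
        r_mid = 0.5 * (r[k] + r[k - 1])
        dr = r[k - 1] - r[k]                        # positive inward step
        I_r = j0 * np.pi * min(r_mid, r_e) ** 2     # current collected inside r_mid (Eq. (9))
        dR = dr / (np.pi * L * sigma_perp * r_mid)  # incremental perpendicular resistance
        psi[k] = psi[k - 1] + I_r * dR / T_e
    return r, psi

if __name__ == "__main__":
    r, psi = sheath_edge_profile(j0=-120.0, r_e=0.05, r_g=0.15,
                                 L=1.0, sigma_perp=5e-3, T_e=3.0)
    print("psi_sh at r_e:", np.interp(0.05, r[::-1], psi[::-1]), "; on axis:", psi[-1])
```

Marching inward from the grounded wall, the profile is logarithmic while the enclosed current is constant (r > r_e) and becomes parabolic once the enclosed current grows as r² (r < r_e).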
To summarize our findings, we have shown in this section that the ability to pass the negative applied bias ψ_e unchanged (up to the sheath parameter Λ) along field lines into the plasma bulk is conditioned upon the smallness of the quantity |ψ_e| χ (1 + Ξ)^−1. The value of this quantity can be obtained for actual plasma parameters through Eq. (7) and the definition of Ξ. For values of ψ_e such that this quantity is greater than 1, the regime is said to be saturated. The plasma potential then no longer varies with ψ_e, ψ_p = ψ_p^sat, whereas the sheath drop ∆_sh ψ = ψ_p − ψ_e grows as |ψ_e| − (1 + Ξ) χ^−1. These results point to the value of thermionic emission for increasing potential control. Finally, a more refined model of currents in the system shows that the saturation phenomenon is local, starting at the outer edge of the biased electrode and moving radially inward until the entire surface of the biased electrode is collecting maximum current density.
Effect of the sheath on the voltage drop along field lines
Controlling the potential drop across the sheath is essential for perpendicular electric field control through end-electrodes biasing, but it is not enough. One indeed must also ensure that the potential does not vary significantly along field lines in the quasi-neutral plasma. Assuming a monotonic variation of φ(r, z) along z, this condition translates into the smallness of ∆_∥ φ(r) = φ_mid(r) − φ_sh(r). Considering only the quasi-neutral plasma (i.e. neglecting sheath effects), Gueroult et al. [13] showed that the normalized voltage drop along field lines, that is ∆_∥ φ(r)/φ_sh(r), is small under the condition τ = (L/r_g) √(σ_⊥/σ_∥) ≪ 1. This condition, however, does not provide a measure of ∆_∥ φ(r) since it depends on φ_sh(r), which is governed by the sheath. In this section we revisit this question in light of the insights into sheath physics obtained in Section 3.
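As a purely illustrative order-of-magnitude check of this criterion (the numbers below are assumed and are not taken from the paper or from Table 1):

```latex
\tau \;=\; \frac{L}{r_g}\sqrt{\frac{\sigma_\perp}{\sigma_\parallel}}
\;\approx\; 10 \times \sqrt{10^{-4}} \;=\; 0.1 \;\ll\; 1
\qquad \text{for } L/r_g = 10,\ \ \sigma_\perp/\sigma_\parallel = 10^{-4},
```

so that, for such parameters, field lines would be close to isopotential in the quasi-neutral plasma and the voltage drop along field lines would be a small fraction of ψ_sh.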
Insights from theory
Ohm's law at the sheath is given by Eq. (14). Although local, this result suggests that the voltage drop along field lines in the quasineutral plasma will depend on the current collected at the electrode. In fact, one shows, as done in Appendix A, that in the particular case where the current density through the sheath does not depend on the radius, which is precisely verified in the saturated regime, this result can be generalized to obtain, to lowest order in τ, the global result of Eq. (15). This result shows that, at least in the saturated regime, the voltage drop along field lines in the quasi-neutral plasma Δ_∥ψ is expected to grow with thermionic emission. Taking a step back, we have seen in Section 3 that thermionic emission can help achieve control over the plasma potential over a larger range of operating conditions compared to the case of a cold biased electrode. It has notably been shown to help limit the voltage drop which exists across the sheath in the case of a strongly negative bias such that |ψ_e|χ > 1. On the other hand, Eq. (15) now suggests that in these same saturated conditions thermionic emission would lead to a greater voltage drop along field lines. To better understand this apparent trade-off, and to which extent thermionic emission is desirable for perpendicular electric field control, we now turn to numerical simulations.
Numerical simulations
While the anisotropic Laplace equation obtained from the combination of Ohm's law for a static neutral background, j = σE, and charge conservation, ∇ · j = 0, allowed for analytical solutions when imposing Dirichlet conditions [33], the use of more physical flux conditions requires numerical modeling. Eq. (16) is thus solved here using finite differences in the interior of the domain shown in figure 7, implementing flux conditions at the electrodes in a way very similar to that employed by Von Compernolle et al. [43]. Specifically, noting z_0 = −L/2 the axial position of the left boundary of the domain, the ion sheath in front of the electrode is modeled via the non-linear Neumann condition on ∂ψ(r, z)/∂z given in Eq. (17). In addition, we model the axial boundary in the electrode plane z = z_0 between r_e and r_g by enforcing ∇²_r ψ(r, z_0) = 0, ground potential at r_g and a potential at r_e computed from the flux condition Eq. (17). This leads to the Dirichlet condition

ψ(r, z_0) = ψ(r_e, z_0) ln(r/r_g) / ln(r_e/r_g)    (18)

which is consistent with the radial profile derived in Sec. 3.3. The rest of the domain's boundary conditions, as shown in figure 7, are straightforward and imposed from symmetry or zero potential.
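To illustrate how this boundary condition can be imposed in practice, the short sketch below evaluates the logarithmic Dirichlet profile of Eq. (18) on the electrode plane for a given electrode-edge potential. The radii and the value of ψ(r_e, z_0) are placeholders, and the non-linear flux condition of Eq. (17) is not reproduced here.

```python
import numpy as np

# Sketch of the Dirichlet boundary profile of Eq. (18) used on the z = z0 plane
# between the electrode edge r_e and the grounded wall r_g. The electrode-edge
# potential psi_re would in practice come from the flux condition Eq. (17);
# here it is a placeholder input, as are the radii and the grid size.

r_e, r_g = 0.05, 0.20                      # [m], illustrative values
r = np.linspace(r_e, r_g, 101)             # radial grid on the annulus r_e..r_g

def boundary_profile(psi_re):
    """Potential on the z = z0 plane for r_e <= r <= r_g, following Eq. (18)."""
    return psi_re * np.log(r / r_g) / np.log(r_e / r_g)

psi = boundary_profile(-30.0)
print(psi[0], psi[-1])                     # -30.0 at r = r_e, 0.0 at r = r_g
```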
To offer a more physical illustration of the effect of thermionic emission, we discuss here the results obtained for a set of dimensional geometric and plasma parameters (see Table 1) which correspond loosely to the conditions expected in an RF or Helicon laboratory plasma.
Focusing first on the on-axis behavior, figure 8 confirms, as expected from the value of |ψ_e|χ ≫ 1 in Table 1, that the regime is saturated for zero thermionic emission (Ξ = 0). The current is indeed equal to its maximum value (in absolute value) while the voltage drop across the sheath Δ_sh ψ is significant. In these conditions the voltage drop along field lines Δ_∥ψ is very small. As the thermionic current is increased, that is as Ξ increases, the voltage drop across the sheath Δ_sh ψ(0) decreases, as expected from the |ψ_e| − (1 + Ξ)χ^(−1) scaling in the saturated regime. Meanwhile, the voltage drop along field lines increases, and numerical simulation results are observed to be in very good agreement with the theoretical prediction obtained in Eq. (15). This trend continues up until Ξ is large enough to lead to the transition from a saturated to a non-saturated regime on-axis. This transition is indicated by the sudden drop in normalized current in figure 8. Depending on plasma and geometrical parameters, the voltage drop along field lines can already be larger than the voltage drop through the sheath when the regime transition takes place. In the non-saturated regime, both the voltage drop across the sheath and the voltage drop along field lines are observed to decrease with Ξ.

Additional insights can be gained by examining how this behavior depends on radius, as shown in figure 9. Starting again from zero emission (Ξ = 0), one finds a uniform behavior across the radius, consistent with the fact that the regime is fully saturated. This is confirmed by the result that the normalized current density j̄_∥ = j_∥(r)/j_max = 1 over the entire electrode. As thermionic emission is turned on and Ξ increases, the voltage drop along field lines Δ_∥ψ is first observed to grow nearly uniformly in the electrode shadow (r < r_e). As Ξ is increased further, the current iso-contours confirm that the transition from saturated to non-saturated regime occurs first on-axis, and progressively moves radially outward, as predicted in Section 3.1. This radial current density non-uniformity is accompanied by a similar variation in the voltage drop along field lines, consistent with Eq. (15). For a given Ξ the voltage drop along field lines Δ_∥ψ(r) hence grows with r in this regime.

An interesting result revealed by figure 9, contrasting with the on-axis results observed in figure 8, lies in the behavior in the non-saturated regime. Indeed, while both the voltage drop along field lines and the current density through the sheath on-axis were seen to drop rapidly with Ξ once the non-saturated regime has been reached, one observes here that the voltage drop can actually grow with Ξ in the outer region of the biased electrode, even in the non-saturated regime. The origin of this apparently counter-intuitive result is to be found in the fact that while the normalized current density j̄_∥ falls off rapidly with Ξ on-axis, the decrease is much slower near r = r_e. As a result the absolute parallel current density through the sheath can still locally grow with Ξ near the outer radius of the electrode. As shown in Appendix A, this growth in parallel current can explain the observed growth of the voltage drop along field lines even in the non-saturated regime, on the condition that the radial gradient length for the current is sufficiently large compared to the electrode radius, which is indeed what is observed in figure 9.
Returning to our motivation of assessing the potential drop along field lines, in particular in the case of thermionic emission, these results show that strong emission can lead to potential variations along field lines in the quasi-neutral plasma of the order of a few electron temperatures. While moderate, these variations could be sufficient to affect perpendicular field homogeneity along field lines. This is particularly true when considering that, as discussed next, these variations may be even larger for a different set of operating conditions and geometric parameters. In addition, it is found here that the axial voltage drop can vary by a comparable amount perpendicular to field lines across the radius of the biased electrode. This, combined with the possibility of a radially dependent voltage drop across the sheath, could lead to potential differences between field lines that intersect the same electrode, hindering the ability to control perpendicular electric fields.
Parametric dependencies
While the results presented in figure 8 and figure 9 were obtained for specific plasma and geometric parameters, they can be used in combination with the analytical results obtained in Sec. 4.1 to offer a more general analysis. Indeed, although Eq. (15) already provided insights into the effect of thermionic emission, a deeper understanding can be gained by developing the parametric dependencies of conductivities.
Expressions for the parallel and perpendicular conductivities in a magnetized plasma are in general non-trivial and require solving a full set of fluid equations [44,45,46], but one can often reasonably use simplified formulas within an appropriate operating parameter space. For instance, electron-neutral and ion-neutral collisions are found to be the main contributions to, respectively, the parallel and perpendicular conductivity across a range of low-temperature partly ionized and magnetized laboratory plasma experiments [35]. In this case the perpendicular conductivity reduces to the Pedersen conductivity, Eq. (19), while the parallel conductivity is

σ_∥ = e²n / (m_e ν_en)    (20)

with ν_en and ν_in the electron-neutral and ion-neutral collision frequencies, respectively.
Here we simply take ν_αn = σ_0 N_n √(T_α/m_α) with α = e, i, m_α the particle mass, σ_0 a fiducial cross section and N_n the neutral density, and ignore for simplicity the temperature dependence of σ_0. Plugging Eq. (20) into Eq. (15) yields the scaling of Δ_∥ψ^sat_en given in Eq. (21). Meanwhile, plugging Eq. (19) into Eq. (7) leads to Eq. (22). Comparing Eq. (21) and Eq. (22), one finds that in regimes where the simplified conductivities Eqs. (19) and (20) hold, a decrease in the magnetic field intensity B_0 will increase the range of operating conditions leading to saturation. In particular, it will require a larger Ξ to transition from saturated to non-saturated operation. Looking at figure 9, this means moving the transition region higher up. Meanwhile, since the voltage drop along field lines does not depend on B_0 but does depend on Ξ, the expansion of the saturated region means that a higher voltage drop along field lines will be observed in this regime. From Eqs. (19) and (20), a similar growth is expected for heavier ion species, since χ_in ∝ m_i while Δ_∥ψ^sat_en ∝ 1/√m_i, or for a smaller electrode radius r_e. The generic results Eq. (7) and Eq. (15) can similarly be used to explore other regimes. For instance, if one considers a denser plasma where the parallel conductivity is now governed by the Spitzer conductivity, with ν_ei the electron-ion collision frequency, then one obtains the corresponding scaling of Δ_∥ψ^sat_ei. This last result shows that in these conditions an increase of the plasma density n will lead to an increase in the voltage drop along field lines in the saturated regime. On the other hand the value of Ξ for which the transition occurs will decrease with n, since Ξ ∝ √(m_i/T_e)/n for j_eth ≫ j_is. As a result the bias for which the transition occurs will also decrease with n.
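A minimal numerical illustration of these simplified conductivities is sketched below. The parallel conductivity follows Eq. (20); for the perpendicular conductivity a Pedersen-type form σ_⊥ ≈ n e² ν_in/(m_i ω_ci²) is assumed here as a stand-in for Eq. (19), which is not reproduced in this excerpt, and all plasma parameters are placeholders loosely representative of an RF or Helicon discharge.

```python
import numpy as np

# Illustrative evaluation of the simplified conductivities discussed in the text.
# sigma_par follows Eq. (20); sigma_perp uses an assumed Pedersen-type form
# (strongly magnetized limit), not the exact Eq. (19). All parameters below are
# placeholders, not values from the paper.

e, me, mi = 1.602e-19, 9.109e-31, 6.64e-26   # argon ion mass assumed
n, Nn = 1e17, 1e19                           # plasma and neutral densities [m^-3]
Te, Ti = 3.0 * e, 0.1 * e                    # temperatures [J]
sigma0 = 1e-18                               # fiducial cross section [m^2] (assumption)
B0 = 0.05                                    # magnetic field [T]

nu_en = sigma0 * Nn * np.sqrt(Te / me)       # electron-neutral collision frequency
nu_in = sigma0 * Nn * np.sqrt(Ti / mi)       # ion-neutral collision frequency
omega_ci = e * B0 / mi

sigma_par = e**2 * n / (me * nu_en)                  # Eq. (20)
sigma_perp = n * e**2 * nu_in / (mi * omega_ci**2)   # assumed Pedersen-type form

print(f"sigma_par  = {sigma_par:.3e} S/m")
print(f"sigma_perp = {sigma_perp:.3e} S/m")
print(f"sigma_perp / sigma_par = {sigma_perp / sigma_par:.3e}")
```

Scanning B_0, m_i or n in this sketch reproduces the qualitative trends discussed above, for instance the growth of the anisotropy ratio, and hence of the saturated operating range, as B_0 decreases.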
Summary
In this study the problem of plasma potential control from negatively biased emissive electrodes positioned at the axial ends of a magnetized plasma column is investigated through a combination of theory and numerical simulations.
By limiting the radial current flowing from the grounded vacuum vessel to the negatively biased electrode through the plasma, the ion sheath formed in front of the electrode is shown to control how much of the applied bias is passed on along field lines to the plasma potential. Besides geometrical parameters, the minimum achievable plasma potential is controlled by the ion saturation current and the perpendicular plasma conductivity. When the applied bias amplitude is smaller than the voltage constructed from the plasma perpendicular resistance and the ion saturation current at the biased electrode, the plasma potential is found to be approximately equal to the applied bias. In contrast, when the applied bias amplitude is larger than this voltage, the plasma potential is no longer controlled by the applied bias, and the voltage difference is found across the sheath. These two regimes are referred to as non-saturated and saturated, respectively.
By increasing the current flowing through the plasma, thermionic emission from the biased electrode makes it possible to lower the minimum achievable plasma potential for a given set of plasma conditions and geometric parameters. This in turn allows accessing a broader range of plasma potentials, or in other words improves plasma potential control through electrode biasing. On the other hand, thermionic emission from the biased electrode is shown to lead to a larger plasma potential variation along magnetic field lines in the quasi-neutral plasma. Scaling laws for this potential variation along field lines are derived analytically through the parametric dependencies of the plasma parallel and perpendicular conductivities, and verified against numerical simulations. Quantitatively, one finds plasma potential variations along field lines of a few electron temperatures for typical RF or Helicon laboratory experiments under strong thermionic emission.
Although such potential variations along field lines may not in themselves be a showstopper, they suggest that the possibility of using thermionic emission to minimize the voltage drop across the sheath, and hence to maximize control over the plasma potential at the sheath edge, should be considered carefully, beyond the practical limit resulting from the formation of a virtual cathode. This is particularly true in the non-saturated regime, where a radially dependent axial voltage drop could lead to a rotation of the electric field vector.
where the O(τ²) term comes from higher order terms involving ∂j_∥/∂z. Integration between z_0 and 0 with the boundary conditions φ(r, z_0) = φ_sh(r) and φ(r, 0) = φ_mid(r) yields, to lowest order in τ, the result quoted in Eq. (15). In the non-saturated regime, the contribution of the third derivative is no longer zero but instead scales as (τ/L_{∇_r j_∥})². This suggests that an analog to Eq. (15) holds in the non-saturated regime as well, provided the radial gradient length of the parallel current is sufficiently large.
Bioinformatics Investigations of Universal Stress Proteins from Mercury-Methylating Desulfovibrionaceae
The presence of methylmercury in aquatic environments and marine food sources is of global concern. The chemical reaction for the addition of a methyl group to inorganic mercury occurs in diverse bacterial taxonomic groups including the Gram-negative, sulfate-reducing Desulfovibrionaceae family that inhabit extreme aquatic environments. The availability of whole-genome sequence datasets for members of the Desulfovibrionaceae presents opportunities to understand the microbial mechanisms that contribute to methylmercury production in extreme aquatic environments. We have applied bioinformatics resources and developed visual analytics resources to categorize a collection of 719 putative universal stress protein (USP) sequences predicted from 93 genomes of Desulfovibrionaceae. We have focused our bioinformatics investigations on protein sequence analytics by developing interactive visualizations to categorize Desulfovibrionaceae universal stress proteins by protein domain composition and functionally important amino acids. We identified 651 Desulfovibrionaceae universal stress protein sequences, of which 488 sequences had only one USP domain and 163 had two USP domains. The 488 single USP domain sequences were further categorized into 340 sequences with ATP-binding motif and 148 sequences without ATP-binding motif. The 163 double USP domain sequences were categorized into (1) both USP domains with ATP-binding motif (3 sequences); (2) both USP domains without ATP-binding motif (138 sequences); and (3) one USP domain with ATP-binding motif (21 sequences). We developed visual analytics resources to facilitate the investigation of these categories of datasets in the presence or absence of the mercury-methylating gene pair (hgcAB). Future research could utilize these functional categories to investigate the participation of universal stress proteins in the bacterial cellular uptake of inorganic mercury and methylmercury production, especially in anaerobic aquatic environments.
Introduction
Mercury is a trace metal which, in both its organic (methylmercury) and elemental (Hg) forms, is known to be highly toxic to all life forms [1,2]. Exposure to mercury can occur through inhalation of toxic elemental mercury vapors [3], and through dietary and non-dietary sources [4,5]. The presence of methylmercury in aquatic environments and marine food sources is of global concern [2,6]. In the United States, mercury-impaired waterbodies have concentrations of mercury in fish tissue that have exceeded 1.0 mg/kg total mercury [7][8][9][10]. The chemical reaction for the addition of a methyl group to inorganic mercury occurs in diverse bacterial taxonomic groups including the Gram-negative, sulfate-reducing Desulfovibrionaceae family of the delta subdivision of the proteobacteria [11][12][13]. The genera in the Desulfovibrionaceae family include Bilophila, Desulfobaculum, Desulfocurvibacter, Desulfocurvus, Desulfohalovibrio, Desulfovibrio, Halodesulfovibrio, Humidesulfovibrio, Lawsonia, and Pseudodesulfovibrio [14]. The availability of whole-genome sequence datasets for some members of the Desulfovibrionaceae [15][16][17] presents opportunities to understand the microbial mechanisms that contribute to methylmercury production in water bodies.
The genomes of bacteria that are able to methylate mercury have a two-gene cluster, hgcA and hgcB, respectively encoding a corrinoid protein and a ferredoxin [12,18]. Bacteria containing the hgcAB gene pair occur in a wide range of habitats including extreme natural environments such as coastal dead zones, deep-sea anaerobic sediments, thawing permafrost soils, and hypersaline ecosystems [19]. A list of Desulfovibrionaceae genomes predicted to be mercury methylators according to the presence of the hgcAB gene pair are available on the data page of the Biogeochemical Transformations at Critical Interfaces project of the Oak Ridge National Laboratory's Mercury Science Focus Area [12,20]. We are interested in genes that encode the universal stress protein (USP) domain (Protein Family (Pfam) Identifiers: Usp, pfam00582 or PF00582), since they aid bacteria in (1) responding to extreme conditions; and (2) the formation as well as maintenance of adherent bacteria communities termed biofilms [21][22][23][24]. Biofilms can methylate mercury (Hg) at higher rates than unattached bacteria and are a location for mercury methylation in the environment [11]. The USP gene count per genome has not been compiled for the Desulfovibrionaceae genomes to enable comparisons between genomes of mercury methylators and those that are not mercury methylators. This research article bridges the knowledge gap on USP gene content of the Desulfovibrionaceae genomes.
The universal stress proteins can be composed of one USP domain; two USP domains in tandem; or one or two USP domains together with other functional domains including transporters, kinases, permeases, transferases, and bacterial sensor proteins [23,25]. The three-dimensional structure of universal stress proteins provides evidence for associated molecular functions, biological processes and cellular components. Adenosine-5′-triphosphate (ATP) functions as a coenzyme as well as an energy molecule [26], and its binding to USPs provides a basis for the functional categorization of USPs [27]. The ATP-binding amino acid motif of G2XG9XG(S/T) categorizes the USP domain into two groups: ATP-binding and non-ATP-binding [27,28]. The categorization of USP domains of the mercury-methylating Desulfovibrionaceae will allow for new bioinformatics investigations and the design of experiments to determine the participation of USPs in bacterial mercury methylation.
Genes for universal stress proteins were predicted from the genome sequencing of Desulfovibrionaceae members, including those that methylate mercury and inhabit extreme environments [29][30][31][32]. Thus, the aim of the research reported here was to investigate the protein sequence features encoded by the predicted universal stress protein sequences of mercury-methylating Desulfovibrionaceae. We have focused our bioinformatics investigations on protein sequence analytics by developing interactive visualizations to categorize Desulfovibrionaceae universal stress proteins by protein domain composition and functionally important amino acids (functional sites).
We applied bioinformatics resources and developed visual analytics resources to categorize a collection of 719 putative universal stress proteins predicted from 93 genomes of Desulfovibrionaceae. We categorized a subset of 651 Desulfovibrionaceae universal stress protein sequences into 488 sequences with one USP domain and 163 with two USP domains. Additionally, the sequences were categorized by (1) the presence of ATP-binding functional sites and (2) the presence of the mercury methylation gene pair in the bacterial genome. The findings provide foundations to investigate the participation of universal stress proteins in the bacterial cellular uptake of inorganic mercury and methylmercury production, especially in anaerobic aquatic environments.
Overview-Applying Bioinformatics Resources and Developing Visual Analytics Resources
The flowchart describing the stages of the bioinformatics investigations is presented in Figure 1. The U.S. Department of Energy Joint Genome Institute's (JGI) Integrated Microbial Genomes and Microbiomes (IMG/M) system [33] was the key bioinformatics resource for collecting and interacting with protein sequence data. We also applied the Batch Web Conserved Domain Search (CD-Search) Tool of the National Center for Biotechnology Information (NCBI) [34] to obtain the number and composition of the protein domains as well as the amino acid functional sites.

Figure 1. Overview of bioinformatics data investigations of universal stress proteins relevant to bacterial mercury methylation. The process integrates bioinformatics resources and visual analytics resources to categorize universal stress proteins by protein features such as protein domain composition (count and type) as well as the presence of the ATP-binding motif.
We typically constructed the results from bioinformatics tasks into datasets that serve as data sources for visual analytics tasks [35]. Bioinformatics tasks that we performed include searching for genes with specific annotation as well as predicting the conserved domains and functional amino acids. The visual analytics tasks include designing interfaces to support interaction, analysis and representation of datasets from bioinformatics tasks [35]. We implemented interactive visualizations (visual representations) in version 2020.4 of Tableau (Tableau, Seattle, WA, USA), a visual analytics software. The framework for interaction design for complex cognitive activities with visual representations guided our designs of the interactive visual representations [36,37]. This interaction design framework defines the types of visualizations (e.g., enclosure tables, box plots, and bar plots) and action patterns (e.g., filtering, selecting and transforming) that promote complex cognitive activities such as decision making, planning, knowledge discovery and understanding [37].
Retrieval of Genome List, Gene List and Protein Sequences annotated with Universal Stress Protein Domain
We applied the Find Genomes and the Find Function tools of the IMG/M system to retrieve, respectively, lists of Desulfovibrionaceae genomes and genes annotated with pfam00582. We exported the genome lists and gene lists with annotations from IMG/M into text files for visual analytics tasks. The retrieval of the genome list and gene list in IMG/M generates an Analysis Cart that includes functionalities for exporting protein sequences (in FASTA format). A text file with protein sequences predicted from genes was the input to the Batch Web Conserved Domain Search (CD-Search) Tool of the National Center for Biotechnology Information (NCBI) [34].
Prediction of Protein Domain Composition and Functional Amino Acid Sites
According to the amino acid sequence of the ATP-binding universal stress protein (MJ0577) of Methanocaldococcus jannaschii, there are 12 functional sites where amino acids contact the ATP molecule [38]. Thus, we submitted to a bioinformatics resource (NCBI Web Batch CD-Search Tool) a text file containing FASTA formatted amino acid sequences of the Desulfovibrionaceae proteins predicted by the IMG/M system to contain the pfam00582 (Usp) domain. We also submitted to the NCBI Web Batch CD-Search Tool a set of 3470 protein sequences predicted from the genome of Desulfovibrio desulfuricans ND132. This additional prediction approach could identify potential universal stress proteins that we did not retrieve with the IMG/M pfam00582 function keyword search. It also demonstrates that our categorization process by functional features can be applied beyond the universal stress protein family. The results generated for the protein sequences were Domain hits, Align details, and Features. We downloaded the Features into a file and removed the comment section such that the dataset on functional sites is in a tab-delimited file ready as input for visual analytics.
The data fields in the Features file are (1) Query (obtained from the FASTA header); (2) Type of protein domain (e.g., specific or superfamily); (3) Title (e.g., Ligand-Binding Site); (4) coordinates (amino acid and position, e.g., P9, V10, D11, C39, M108, G109, R111, G112, G122, S123, V124, T125); (5) complete size (the expected number of functional sites, e.g., 12); (6) mapped size (observed functional sites, e.g., 12); and (7) source domain (protein domain source of functional sites, e.g., 23,812 for the Usp domain). The data file has a record for each protein domain present in the sequence. Thus, it was possible to identify sequences with more than one protein domain, including the tandem-type Usp domains. We constructed patterns from the coordinates to facilitate tasks on visual representations, interactions and analyses (such as categorizing and comparing sequences) in a visual analytics software. We developed Perl code to extract patterns from the amino acid coordinates. For example, from the coordinates "P9, V10, D11, C39, M108, G109, R111, G112, G122, S123, V124, T125", the amino acid pattern "PVDCMGRGGSVT" and the amino acid position pattern "9_10_11_39_108_109_111_112_122_123_124_125" were extracted. Additional information on the Perl code and its application beyond the amino acid sequences of the universal stress proteins is presented in Appendix A (Figure A1). For comparison and accuracy verification of the ATP-binding motif detection procedure, we extracted patterns from the sequences of 10 universal stress proteins from Mycobacterium tuberculosis (lab strain H37Rv), whose USPs were extensively investigated for ATP-binding capacity. We performed scripting tasks on computing hardware including a large memory computer cluster (carbonate.uits.iu.edu) configured to support high-performance, data-intensive computing at the National Center for Genome Analysis Support (NCGAS), Indiana University [39].
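The idea behind the pattern extraction can be illustrated with the short sketch below. It is a Python re-expression of the logic described above; the published Perl code on GitHub remains the authoritative implementation.

```python
# Re-implementation sketch (in Python) of the pattern extraction described above;
# the published Perl code on GitHub is the authoritative version.

def extract_patterns(coordinate_string):
    """Split a CD-Search coordinate string such as 'P9, V10, D11, ...' into an
    amino acid pattern and an underscore-joined position pattern."""
    residues, positions = [], []
    for token in coordinate_string.split(","):
        token = token.strip()
        residues.append(token[0])      # one-letter amino acid code
        positions.append(token[1:])    # residue position in the sequence
    return "".join(residues), "_".join(positions)

coords = "P9, V10, D11, C39, M108, G109, R111, G112, G122, S123, V124, T125"
aa_pattern, pos_pattern = extract_patterns(coords)
print(aa_pattern)    # PVDCMGRGGSVT
print(pos_pattern)   # 9_10_11_39_108_109_111_112_122_123_124_125
```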
Count of Universal Stress Protein Genes in Desulfovibrionaceae Genomes
Figure 2 is an overview of the distribution of the USP genes in Desulfovibrionaceae genomes according to sequencing status and USP gene count. Our bioinformatics investigation of the protein domain composition of 3407 protein sequences from Desulfovibrio desulfuricans ND132, a bacterial mercury methylator, identified three additional USP genes, bringing the total to 13 USP genes. Therefore, we collected into a text file 719 FASTA-formatted protein sequences annotated to contain the universal stress protein domain. In Figure 3, we present a comparison of the protein sequence features for six Desulfovibrionaceae genomes, including five that encode the gene pair for mercury methylation. Desulfovibrio africanus, Desulfovibrio desulfuricans ND132 and Desulfovibrio halophilus DSM 5663 are mercury-methylating Desulfovibrionaceae species. Among the genomes of the mercury-methylating Desulfovibrio africanus (reclassified as Desulfocurvibacter africanus), strain Walvis Bay has an additional gene encoding a 150 aa universal stress protein (Figure 3). Furthermore, strain DSM 2603 has an additional 282 aa universal stress protein. In the case of Desulfovibrio desulfuricans ND132, the amino acid lengths (aa) observed are 139, 146, 148, 156, 162, 265, 288, 294, 295, 297, 310 and 629. The Desulfovibrio gilichinskyi K3S genome encodes five universal stress proteins, including a USP of 630 aa. Desulfovibrio halophilus DSM 5663 encodes seven universal stress proteins, including two protein sequences (with lengths 146 aa and 297 aa) that were also predicted from the genomes of Desulfovibrio desulfuricans ND132 and Desulfovibrio gilichinskyi K3S.
Protein Domain Composition and Functional Sites of Desulfovibrionaceae Universal Stress Proteins
The results of the NCBI Batch Web Conserved Domain Search (CD-Search) bioinformatics tool for the 719 protein sequences included predictions on the type and position of the functionally relevant amino acid residues as well as the protein domain(s). The functional sites were associated with four types of protein domains. We identified 651 universal stress protein sequences that have at least one conserved USP domain model (the Position Specific Scoring Matrix Identifier (PSSM-ID) for the USP domain is 23,812). Additionally, we observed 353 patterns (signatures) of amino acid residues (functional sites) associated with 247 amino acid position patterns. For example, the amino acid pattern "AVDVMGHGGSVA" had the highest occurrence (54 sequences) and is associated with several amino acid position patterns. The amino acid position pattern 9_10_11_39_111_112_114_115_125_126_127_128 was restricted to sequences from three Bilophila species with Locus Tags HMPREF0178_03304, T370DRAFT_02139, and HMPREF0179_03080. Based on the ATP-binding motif of G2XG9XG(S/T), our algorithm (a calculated field in the visual analytics software) classified the 353 functional site amino acid patterns into two motif types: 236 (non-ATP-binding motif) and 117 (ATP-binding motif). We designed a visual representation to group the 651 protein sequences by amino acid sequence length, amino acid pattern and amino acid position pattern (Figure 4 shows a subset for three genomes: D. desulfuricans ND132, D. halophilus DSM 5663 and D. gilichinskyi K3S). The design allowed us to identify proteins with identical amino acid sequence length and pattern of functional sites (for example, the 146 aa and 297 aa sets encoded by the three genomes). We observed 13 types of functional site patterns from 10 of the 13 universal stress protein sequences predicted from Desulfovibrio desulfuricans ND132. The DND132_2657 gene for a 146 aa protein was among the 54 Desulfovibrionaceae USP genes encoding the ATP-binding functional site pattern "AVDVMGHGGSVA". The protein domain arrangement of DND132_2717, a 629 aa protein sequence, comprises a metal ion transporter domain and a USP domain with functional sites that do not conform to the ATP-binding motif. Comparison of amino acid sequence length and protein domain composition provided evidence that, among the Desulfovibrionaceae genomes investigated, the combination of a metal ion transporter domain and a universal stress protein domain is unique to D. desulfuricans ND132 and D. gilichinskyi K3S (previously named Desulfovibrio algoritolerance K3S). The IMG/M Gene ID, Locus Tag and amino acid sequence length for the equivalent gene of DND132_2717 in Desulfovibrio gilichinskyi K3S are 2709103738, Ga0139011_2749 and 630 aa, respectively.
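One way such a motif-based classification could be expressed is sketched below: it checks, on the 12-residue functional-site pattern and its position pattern, that the three glycines and the terminal serine/threonine of the G-2X-G-9X-G(S/T) motif occur with the expected spacings. The mapping of the motif residues onto pattern positions 6, 8, 9 and 10 is inferred from the MJ0577 reference sites (G109, G112, G122, S123); the actual calculated field used in the visual analytics software may differ in detail.

```python
# Illustrative sketch of an ATP-binding classification based on the
# G-2X-G-9X-G(S/T) motif applied to the 12 functional-site residues.
# The assumed motif positions within the pattern are an inference from the
# MJ0577 reference coordinates, not the published calculated field.

def is_atp_binding(aa_pattern, pos_pattern):
    if len(aa_pattern) != 12:
        return False
    pos = [int(p) for p in pos_pattern.split("_")]
    if len(pos) != 12:
        return False
    residues_ok = (aa_pattern[5] == "G" and aa_pattern[7] == "G"
                   and aa_pattern[8] == "G" and aa_pattern[9] in "ST")
    spacing_ok = (pos[7] - pos[5] == 3       # G..G with two residues in between
                  and pos[8] - pos[7] == 10  # G..G with nine residues in between
                  and pos[9] - pos[8] == 1)  # S/T immediately after the last G
    return residues_ok and spacing_ok

print(is_atp_binding("AVDVMGHGGSVA",
                     "9_10_11_39_111_112_114_115_125_126_127_128"))   # True
print(is_atp_binding("PVDCMGRGGSVT",
                     "9_10_11_39_108_109_111_112_122_123_124_125"))   # True
```

Both example patterns above, the Bilophila-associated "AVDVMGHGGSVA" and the MJ0577-derived "PVDCMGRGGSVT", satisfy the rule, consistent with their classification as ATP-binding.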
In Figure 5, the visual analytics design integrates the amino acid length, ATP-binding prediction of the USP domain, and the mercury methylation status of the source bacteria. The view among other functions allow for the categorization of a tandem-type USP by the types of ATP-binding prediction (Y = ATP-binding for both domains; N = Both domains are not ATP-binding; and * = One domain is ATP-binding and other does not bind ATP). The findings were confirmed with the NCBI Conserved Domains resource for three tandem-type USPs encoded in Desulfovibrionaceae genomes that encode the hgcA and hgcB proteins ( Figure 6). Figure A2 in the Appendix A presents profiling of 92 Desulfovibrionaceae USPs by amino acid pattern, amino acid length, ATP-Binding motif, and mercury methylation status of source bacteria.
Discussion
We have conducted bioinformatics investigations on the universal stress proteins encoded in Desulfovibrionaceae genomes, including genomes of strains that methylate mercury. Prior to our study, the characterization of Desulfovibrionaceae USPs was limited to genome-wide transcriptome or proteome analyses [17,32,40]. Our report provides findings from protein sequence analytics to guide further research on the molecular functions, biological processes and cellular components associated with Desulfovibrionaceae universal stress proteins (USPs). In our prior publications, we have applied bioinformatics and developed visual analytics resources to understand the universal stress proteins of several taxonomic groups, namely Viridiplantae, Bacillus, Schistosoma, Alcanivorax, Brucella and Lactobacillus [35,[41][42][43][44][45][46]. In this report, we have made noteworthy findings on a collection of 719 Desulfovibrionaceae USPs regarding (1) protein domain arrangement; and (2) functional amino acid residues (Figure 1).
The observed counts of USP genes per genome among 93 Desulfovibrionaceae genomes ranged from 1 to 16 (Figure 2). The count of USP genes per genome could reflect the diverse phenotypic properties and habitats of the Desulfovibrionaceae members. The number of USP genes per genome in Escherichia coli and Mycobacterium tuberculosis is six and ten, respectively [23,47]. The genomes of three Halodesulfovibrio aestuarii strains had 16 USP genes, the highest observed among the 93 genomes investigated. Halodesulfovibrio species tolerate up to 6% (w/v) sodium chloride (NaCl), with optimum growth at 1.5-3.5% (w/v) [48]. Future research could investigate the relationship between universal stress protein function and mercury methylation in the NaCl-tolerant Desulfovibrio halophilus DSM 5663.
The genomes of three Desulfovibrio desulfuricans desulfuricans strains, namely ATCC 27774, DSM 642 and DSM 7057, had only two USPs compared to 10 USP genes (retrieved from the IMG/M resource) for strain D. desulfuricans ND132. The finding of an excess number of USP genes further supports the reclassification of strain ND132. Recent phylogenetic analyses have clustered strain ND132 with validly published and reclassified members of the Pseudodesulfovibrio genus, including the mercury-methylating Pseudodesulfovibrio hydrargyri BerOc1 [49,50]. A February 2021 publication formally described strain ND132 as Pseudodesulfovibrio mercurii ND132 [51]. We recommend comparative analysis of the universal stress proteins of Pseudodesulfovibrio strains to determine the effects of protein domain composition and the genomic context of USP genes on stress response and methylmercury production.
Based on the ATP-binding motif of G2XG9XG(S/T), our algorithm (a calculated field in the visual analytics software, Tableau) categorized the 353 functional site amino acid patterns into two motif types: 236 (non-ATP-binding motif) and 117 (ATP-binding motif) (Figure 5). For tandem-type USPs, we developed visual analytics views that provide three categories according to ATP binding (Figure 6). Future research can investigate the biological significance of these categories. The Desulfovibrio desulfuricans ND132 protein sequences for DND132_1399, DND132_2319, and DND132_2657 have evidence for ATP binding. Research investigations are required to understand the molecular function, biological processes and cellular components of the predicted ATP-binding USPs of strain ND132. The ATP-binding universal stress proteins are predicted to function in energy-dependent biological processes [52]. Examples of ATP (energy)-regulated processes are: (1) the regulation of entry into the chronic persistent growth phase in Mycobacterium tuberculosis [28]; (2) the response to acid stress conditions during the exponential growth phase in Listeria innocua [53]; (3) susceptibility of Mycobacterium tuberculosis; and (4) survival of Mycobacterium smegmatis in human monocyte cells [52]. The visual analytics resource accompanying this report provides a means for interacting with the datasets on predicted ATP-binding status. Further, the genomic context or neighborhood of the USP genes can provide insights on the molecular function, biological processes and cellular components of the universal stress proteins of strain ND132.
Among the four universal stress protein sequences of strain ND132 that contain two protein domains (DND132_1487, DND132_1547, DND132_1386, and DND132_2717), only DND132_2717 (a 629 aa protein) has a metal ion transporter domain (pfam01566 or PF01566: natural resistance-associated macrophage protein (NRAMP) domain) (Figure 3). The transmembrane NRAMP family of transporters function as divalent metal ion transporters from bacteria to humans [54]. Thus, we recommend research to determine (1) if DND132_2717 transports inorganic divalent mercury ions (Hg2+); (2) if DND132_2717 localizes to the membrane; and (3) if DND132_2717 function is regulated by the universal stress protein domain. The divalent metal cation transporter is listed among the metal transporters impacted by the deletion of the hgcAB genes of strain ND132 [55]. A yeast divalent cation transporter, DMT1, participates in the uptake of inorganic mercury [56]. Research publications on the uptake of inorganic mercury in mercury-methylating Desulfovibrionaceae species and related organisms could guide these future studies [57][58][59][60].
The results of bioinformatics investigations are influenced by several factors, including the version of the software and updates to datasets. The taxonomy of the Desulfovibrionaceae has recently been updated, including the reclassification and formal description of strain ND132 [51,61,62]. Our investigation has considered these limitations and has included information on when the investigations were conducted. We also use multiple approaches, databases and genomic data to achieve consensus results. We have provided results as part of visual analytics resources to support the formulation of new problems for investigations beyond those reported here. The visual analytics resources can also serve as resources for educational interventions for learning biological data investigation [63]. We are also using the methods and findings to investigate the denitrification potential of bacterial communities of the Eastern Oyster (Crassostrea virginica) found in benthic environments [64].
Conclusions
We have determined the protein domain composition and ATP-binding functional sites to categorize a collection of 719 genes predicted to encode universal stress protein (USP) domains in 93 Desulfovibrionaceae genomes. The key findings are the categories of universal stress protein sequences: 488 sequences with a single USP domain, further categorized by the presence (340 sequences) or absence (148 sequences) of the ATP-binding motif, and 163 sequences with two USP domains, categorized into (1) both USP domains with ATP-binding motif (3 sequences); (2) both USP domains without ATP-binding motif (138 sequences); and (3) one USP domain with ATP-binding motif (21 sequences). We developed visual analytics resources to facilitate the investigation of these categories of datasets in the presence or absence of the mercury-methylating gene pair (hgcAB). Future research could utilize these functional categories to investigate the participation of universal stress proteins in the bacterial cellular uptake of inorganic mercury and methylmercury production, especially in anaerobic aquatic environments.
Supplementary Materials: The online versions of the interactive analytics resources produced are available at https://public.tableau.com/app/profile/qeubic/viz/uspdesulfofamily/overview. Figure A1: Evidence that the process for analytics of functional sites of protein sequences can be applied beyond the universal stress proteins; Figure A2: Profiling of 92 Desulfovibrionaceae USPs by amino acid pattern, amino acid length, ATP-binding motif, and mercury methylation status of source bacteria. Data Availability Statement: The Perl code, input sequences, and output datasets used in this report for the analytics of the conserved protein domains are available on the GitHub software development platform at https://github.com/qeubic/protein_features (accessed on 20 August 2021).
Appendix A
The Perl code for preparing the output from the NCBI Conserved Domain search can be applied to any collection of FASTA-formatted protein sequences or list of NCBI protein identifiers. In this report, we applied it to the predicted protein sequences from the genome sequence of Desulfovibrio desulfuricans ND132. The input sequences are from the Integrated Microbial Genomes/Microbiomes (IMG/M) resource. The Perl code, input sequences, and output datasets used in this report for the analytics of the conserved protein domains are available on the GitHub software development platform at https://github.com/qeubic/protein_features (accessed on 20 August 2021).
A visual analytics resource for protein sequence analytics is available at https://public.tableau.com/app/profile/qeubic/viz/uspdesulfofamily/figureA1 (accessed on 20 August 2021). The designs are for interacting with the data on the protein families including protein domain composition and functional sites. Figure A1 shows that the protein sequence analytics procedure can be applied to other protein groups (e.g., nitrogen metabolism protein groups that have "nitr" in the gene/protein name). Figure A1. Evidence that the process for analytics of functional sites of protein sequences can be applied beyond the universal stress proteins. The visual representation integrates the protein domain identifier (source protein domain (PSSM-ID)), locus tag, gene name, amino acid pattern and amino acid position pattern. The protein sequences for Desulfovibrio desulfuricans ND132 were obtained from the Integrated Microbial Genomes and Microbiomes (IMG/M) system. The protein sequences with annotation for nitrogen metabolism are relevant to our research on the denitrification by microbial communities in oysters [64].
The integration of disparate features of the universal stress proteins from 50 Desulfovibrionaceae genomes can facilitate comparative analysis and the planning of future research. Therefore, we have constructed a visualization that integrates the amino acid pattern, amino acid length, ATP-binding motif, and mercury methylation status of source bacteria (Figure A2). The mercury methylation status was obtained from the data page of the Biogeochemical Transformations at Critical Interfaces project of the Oak Ridge National Laboratory's Mercury Science Focus Area [12,20]. A total of 92 Desulfovibrionaceae USPs were profiled according to 12 amino acid patterns, 28 amino acid lengths, two ATP-binding motifs, and the mercury methylation status of the source bacteria.
Analyzing the effect of electrode size on electrogram and activation map properties
Background: Atrial electrograms recorded from the epicardium provide an important tool for studying the initiation, perpetuation, and treatment of AF. However, the properties of these electrograms depend largely on the properties of the electrode arrays that are used for recording these signals. Method: In this study, we use the electrode's transfer function to model and analyze the effect of electrode size on the properties of measured electrograms. To do so, we use both simulated as well as clinical data. To simulate electrogram arrays we use a two-dimensional (2D) electrogram model as well as an action propagation model. For clinical data, however, we first estimate the trans-membrane current for a higher resolution 2D modeled cell grid and later use these values to interpolate and model electrograms with different electrode sizes. Results: We simulate electrogram arrays for 2D tissues with 3 different levels of heterogeneity in the conduction and stimulation pattern to model the inhomogeneous wave propagation observed during atrial fibrillation. Four measures are used to characterize the properties of the simulated electrogram arrays of different electrode sizes. The results show that increasing the electrode size increases the error in LAT estimation and decreases the length of conduction block lines. Moreover, visual inspection also shows that the activation maps generated by larger electrodes are more homogeneous with a lower number of observed wavelets. The increase in electrode size also increases the low voltage areas in the tissue while decreasing the slopes and the number of detected deflections. The effect is more pronounced for a tissue with a higher level of heterogeneity in the conduction pattern. Similar conclusions hold for the measurements performed on clinical data. Conclusion: The electrode size affects the properties of recorded electrogram arrays, which can in turn complicate our understanding of atrial fibrillation. This needs to be considered while performing any analysis on the electrograms or comparing the results of different electrogram arrays.
Introduction
Recording and processing of electrograms (EGMs) is the cornerstone of mapping procedures guiding ablative therapies of cardiac arrhythmias. Thorough understanding of the impact of recording technology on EGM morphology is of paramount importance, particularly in case of complex tachyarrhythmias such as atrial fibrillation (AF) [1,2]. However, the properties of these electrograms depend to a large extent on the physical dimensions of the electrode arrays that are used for recording these signals.
As shown in several studies, the electrode's size (or diameter) is an important parameter that can affect the characteristics of the recorded electrograms. Most studies focus on bipolar electrograms and measurements that are performed only on clinical recordings [3][4][5][6][7]. There are only a few studies investigating the effect of the electrode size on the properties of the unipolar electrograms using both electrophysiological models and clinical recordings. These properties include signal-to-noise ratio (SNR), fractionation level, voltage level, and the error in local activation time (LAT) estimation [8][9][10]. In general, these studies show that increasing the electrode size increases the SNR and consequently the atrial activity is less affected by noise and artifacts [11,12]. However, this is mainly the case for homogeneous tissue. It has also been shown that the number of deflections and the level of fractionation also increases by increasing the electrode size [7][8][9]. Moreover, low voltage areas in the tissue also increase when using bigger electrode sizes [6,8]. It has also been shown that using larger electrodes increases the error in LAT estimation [10], which will result in activation maps with less detail and less conduction blocks [3,5]. These effects are more pronounced in zones of conduction block or slow conduction in the tissue (i.e., scarred tissue) [3,4].
Some of these studies use clinically recorded electrograms obtained with different electrogram arrays with different electrode sizes that are successively positioned on similar locations on the atria. However, it should be noted that not all observed changes in the electrogram properties are due to changes in electrode size, but also due to the spatio-temporally varying nature of electrical wave propagation (especially during AF) and the changes in the inter-electrode distances. Moreover, bipolar electrograms are also affected by varying propagation directions. In this study, we exclusively investigate the effect of the electrode size on the properties of high resolution unipolar electrogram arrays by keeping the other parameters like inter-electrode distances and electrical wave propagation patterns fixed. We use both clinical observation and electrophysiological models that govern the wave propagation and electrogram generation to analyze and investigate these effects. This is done by investigating the electrode's transfer function and its properties, making the comparisons and conclusions more robust. Moreover, we investigate these results for tissues with different levels of inhomogeneity in the conductivity map. We also focus on the overall properties of the electrogram array and not only on the per-electrode features. These include the overall error in local activation time estimation, the length of slow conduction or block lines, low voltage areas in the tissue, as well as the number of deflections.
We first present the electrogram model, our approach for generating simulated electrograms, and our proposed framework for generating high resolution electrogram arrays with different electrode sizes from clinical electrograms in Section 2. We also provide a description of our clinical recording setup and the employed measures for characterizing the electrogram array properties in the same section. In Section 3 and Section 4, we perform our approach on simulated and clinical electrograms, respectively, and present the final results. We also discuss the optimal electrode diameter and appropriate inter-electrode distances for electrode arrays with different electrode sizes, the maximum electrode size for capturing scarred tissue of different sizes, and a proper scaling of the electrogram amplitude for a better comparison of electrograms recorded with different arrays. We discuss our findings and conclude the study in Section 5.
Atrial tissue computer model
In our model we consider the atrial tissue as a two-dimensional mono-layer grid of cells where the electrode array is positioned at a constant height z_0 above the atrial tissue. We model the electrogram as a weighted summation of trans-membrane currents produced by the cells in the tissue in the vicinity of the electrode, where the weights depend on the inverse of the cell-to-electrode distances. An electrogram at location (x, y) and at time sample t can then be modeled as in Eq. (1) [13],
where m = 1, 2, ⋯, M is the electrode index with M the total number of electrodes, I_tr(x, y, t) is the trans-membrane current, A is the area in which the modeled cells are located, A(x, y) is the area variable, and σ_e is the constant extra-cellular conductivity. Note that for now we assume that we are recording the electrograms with point electrodes whose diameters can be neglected. We will investigate the effect of electrode diameter later in Section 2.2. The trans-membrane current produced by each cell can be computed using Eq. (2) [14],
where V(x, y, t) is the per-cell potential, Σ(x, y) is the intracellular conductivity tensor, and S_v = 0.24 μm⁻¹ is the cellular surface-to-volume ratio. The potential and trans-membrane current can simultaneously be calculated using the reaction-diffusion equation (Eq. (3)) that governs the action potential propagation in the tissue [14], where C = 1 μF cm⁻² is the total membrane capacitance, I_st is the stimulus current, and I_ion is the total ionic current computed according to the Courtemanche model in Ref. [15].
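To make the weighted-summation picture concrete, the sketch below evaluates a discretized version of this electrogram model for a single time sample: each cell on the grid contributes its trans-membrane current weighted by the inverse of its distance to the electrode, with the constant c = Δx²/(4πσ_e) introduced in the next subsection. The grid dimensions, conductivity and current values are illustrative placeholders, not the parameters used in the study.

```python
import numpy as np

# Sketch of the discretized electrogram model described above: each electrode
# samples a distance-weighted sum of the trans-membrane currents of all cells.
# Grid size, spacing, conductivity and currents are placeholders, not the
# simulation output of the paper.

dx      = 0.1e-3          # cell-to-cell distance [m]
z0      = 0.5e-3          # electrode height above the tissue [m]
sigma_e = 0.2             # extra-cellular conductivity [S/m] (placeholder)
nx = ny = 200

x = (np.arange(nx) - nx / 2) * dx
y = (np.arange(ny) - ny / 2) * dx
X, Y = np.meshgrid(x, y, indexing="ij")

def electrogram(i_tr, xm, ym):
    """Electrogram at electrode position (xm, ym) for one time sample of I_tr."""
    dist = np.sqrt((X - xm) ** 2 + (Y - ym) ** 2 + z0 ** 2)
    return dx ** 2 / (4 * np.pi * sigma_e) * np.sum(i_tr / dist)

i_tr = np.random.randn(nx, ny)     # stand-in for the trans-membrane current map
print(electrogram(i_tr, 0.0, 0.0))
```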
Electrode's transfer function model
For a uniform grid of cells with Δx denoting the cell-to-cell distance, we can rewrite Eq. (1) as a 2D spatial convolution of trans-membrane currents with an appropriate electrode transfer function R_0(x, y), where ** denotes the 2D spatial convolution and c = Δx^2/(4πσ_e) contains all constants and will be omitted for simplification. We introduced in Eq. (4) the sampling operator S_0(x, y) = Σ_m δ(x − x_m, y − y_m) with Dirac delta functions to select the M spatial locations on the grid on which we have measurements and replace the other locations with zero. This can also be used to de-select faulty electrodes. The electrode transfer function is a function of r = √(x^2 + y^2), the horizontal distance between the electrode (at origin) and a cell at location (x, y). However, an electrode whose diameter is bigger than the modeled cell size (i.e., d_0 > Δx) can no longer be considered a point electrode. The transfer function should therefore take the diameter of the electrode into account, as proposed in Ref. [8]. The first row of Fig. 1 shows the 2D representation of the normalized transfer function of a point electrode from Eq. (5) as a function of x and y, as well as the normalized transfer function as a function of r, both computed by setting z_0 = 0.5 mm. As can be seen, the value of the transfer function, or the weight of the trans-membrane current in Eq. (4), equals 1 for the cell that is exactly under the electrode's center. A large weight indicates more influence on the final recorded electrogram. As we move further from the center, the weights decrease but the values are still noticeable. One important parameter commonly used to characterize the transfer function is the full width at half maximum (FWHM), denoted on the plot. It shows the diameter at which the weight of the cells is greater than half of the maximum value, indicating their significant influence on the final recorded electrogram. A small FWHM (a narrower transfer function) means that the electrode records data from a smaller area and thus provides more local electrograms. A large FWHM means that the recorded electrogram is the summation of activities in a larger area. This can severely affect the morphology of the local activities if the propagation is not homogeneous in the electrode's neighborhood. In general, summing up activities in a larger area smooths out important local details.
The bottom left plot in Fig. 1 shows the transfer functions for different electrode diameters d_0 ∈ {0.5, 2, 4, 8} mm with z_0 = 0.5 mm. The bottom right plot in Fig. 1 shows the transfer functions when z_0 = 1 mm, assuming a thicker tissue. As can be seen in both plots, as the electrode diameter increases, the transfer function gets wider (larger FWHM), indicating that the recorded electrogram will be more influenced by neighboring activities. The difference in FWHM is more evident in thinner tissues and for smaller diameters. This will be investigated in more detail in Section 3.3.
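The point-electrode kernel of Eq. (5) and the effect of a finite diameter can be sketched numerically. Since the exact expression of Eq. (6) is not reproduced above, the finite electrode is approximated here by averaging the point kernel over points sampled on a disc of diameter d_0, in the spirit of Ref. [8]; this is an assumption, and the code is an illustrative Python/NumPy sketch rather than the study's implementation.

import numpy as np

def point_kernel(r, z0):
    # Normalized point-electrode transfer function (equals 1 at r = 0).
    return z0 / np.sqrt(np.asarray(r, float)**2 + z0**2)

def disc_kernel(r, z0, d0, n=400, seed=0):
    # Finite-electrode kernel approximated by averaging the point kernel over
    # n random points on a disc of diameter d0 (assumed form, cf. Ref. [8]).
    rng = np.random.default_rng(seed)
    rad = 0.5 * d0 * np.sqrt(rng.random(n))
    ang = 2.0 * np.pi * rng.random(n)
    px, py = rad * np.cos(ang), rad * np.sin(ang)
    rr = np.asarray(r, float)[..., None]
    return np.mean(z0 / np.sqrt((rr - px)**2 + py**2 + z0**2), axis=-1)

def fwhm(kernel_vals, r):
    # Full width at half maximum of a radially sampled kernel (r >= 0).
    half = 0.5 * kernel_vals.max()
    return 2.0 * r[kernel_vals >= half].max()

r = np.linspace(0.0, 20e-3, 4001)
print(fwhm(point_kernel(r, 0.5e-3), r))        # about 1.73 mm for a point electrode at z_0 = 0.5 mm
print(fwhm(disc_kernel(r, 0.5e-3, 2e-3), r))   # wider for a 2 mm electrode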
Modeling abnormal tissue
To generate simulated fractionated electrograms that are representative of clinical data, various approaches have been suggested in the literature. Jacquemet et al. in Ref. [8] incorporate the heterogeneities in the conductivity as a set of randomly positioned lines of conduction block that disconnect the coupling between the cells on the grid. Vigmond et al. in Ref. [16], on the other hand, model the conduction disturbance by randomly disconnecting the coupling between some modeled cells and their neighbors through randomly positioned dots of conduction block. In this study we use both patterns simultaneously for the simulated conductivity maps of the modeled tissue. This provides simulated electrograms and activation maps that are more similar to clinical recordings (by visual inspection).
To simulate electrograms with different levels of fractionation, we use conductivity maps with varying levels of conduction block density. These are shown in the first row of Fig. 2, denoted as T1, T2 and T3, having a low, medium and high density of conduction block, respectively. For comparison, we have also shown the results for a homogeneous tissue with planar wave propagation, denoted by T0, which serves as a standard reference for the other tissue types. The size of each tissue is 213 × 173 cells, with a cell-to-cell distance of Δx = 0.5 mm. We also activate the tissues using one or two activation waves entering the tissue from different locations to simulate the activation maps during AF.
To model action potential propagation in the simulated tissues, Eq. (3) is discretized and solved using a finite difference method with a no-flux boundary condition. The activities are simulated for 1000 ms to include one complete atrial beat, but only a selected time window of 150 ms in length is used for the evaluation of the electrograms, as it includes all the atrial activities. A more detailed description of the simulation steps and parameters can be found in Ref. [17]. The resulting activation maps are shown in the second row of Fig. 2. Each pixel in the activation map represents the true activation time of its corresponding cell, which is annotated as the time when the cell's potential V reaches a threshold value of −40 mV in the depolarization phase of its action potential. The white pixels belong to the cells that were positioned on a conduction block and did not get activated. Finally, Eq. (1) is used to compute the simulated electrograms recorded by an assumed electrode array of size 77 × 33 cells positioned at the center of the tissue at a constant height of z_0 = 0.5 mm, which is denoted by a red rectangle on the maps. The last panels in Fig. 2 show example electrograms from each tissue computed for four different electrode diameters d_0 ∈ {0.5, 2, 4, 8} mm.
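To illustrate the kind of finite-difference discretization described above, the following is a minimal Python/NumPy sketch of an explicit monodomain-style update with no-flux boundaries. To keep it short it uses a toy cubic ionic current instead of the Courtemanche model of Ref. [15], treats the conductivity as a single scalar and uses illustrative parameter values; it is not the simulation code used in the study.

import numpy as np

def simulate(ny, nx, nt, dt=0.02, dx=0.5, D=1.0, C=1.0):
    # Explicit Euler integration of C dV/dt = D * Laplacian(V) - I_ion,
    # with zero-gradient (no-flux) boundaries and a toy cubic ionic current.
    V = np.full((ny, nx), -80.0)
    V[:, :3] = 20.0                                  # "stimulus": depolarize the left edge
    for _ in range(nt):
        Vp = np.pad(V, 1, mode="edge")               # zero-gradient boundary condition
        lap = (Vp[1:-1, 2:] + Vp[1:-1, :-2] + Vp[2:, 1:-1] + Vp[:-2, 1:-1] - 4.0 * V) / dx**2
        I_ion = 5e-5 * (V + 80.0) * (V + 20.0) * (V - 20.0)   # rest at -80, threshold at -20
        V = V + dt * (D * lap - I_ion) / C
    return V

V = simulate(173, 213, nt=2000)   # planar wavefront on a homogeneous sheet (cf. tissue T0)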
Clinical studies
The study population consisted of 10 adult patients undergoing surgery at the Erasmus Medical Center Rotterdam. The study was approved by the institutional medical ethics committee (MEC2010-054/MEC2014-393) [18,19]. Written informed consent was obtained from all patients. Patient characteristics (e.g. age, medical history, cardiovascular risk factors) were obtained from the patients' medical records. Epicardial high-resolution mapping was performed prior to commencement of extracorporeal circulation, as previously described in detail [20]. A temporary bipolar epicardial pacemaker wire attached to the RA free wall served as a reference electrode. A steel wire fixed to the subcutaneous tissue of the thoracic cavity was used as an indifferent electrode. Epicardial mapping was performed with a 192-electrode array (electrode diameter 0.45 mm, inter-electrode distance 2.0 mm). The electrode array was positioned visually by the surgeon on 9 atrial mapping sites using the anatomical borders. We only use the data recorded from Bachmann's bundle. Ten seconds of induced AF were recorded from every mapping site, including a surface ECG lead, a calibration signal of 2 mV and 1000 ms, a bipolar reference EGM and all unipolar epicardial EGMs. Data were stored on a hard disk after amplification (gain 1000), filtering (bandwidth 0.5 to 400 Hz), sampling (1 kHz) and analogue-to-digital conversion (16 bits).
Interpolating (clinical) electrograms and estimating electrograms for different electrode sizes
Due to the unstable and unpredictable nature of electrical wave propagation during AF, it is not possible to repeat similar recordings with different electrode arrays (having different electrode diameters and therefore different inter-electrode distances) during AF. To overcome this issue and to interpolate and estimate the electrograms recorded with different arrays, we first estimate high-resolution trans-membrane currents and subsequently model the effect of larger electrode dimensions. We discretize the 3D tissue activity in space. We use source clamping and replace each block of cells in the real three-dimensional tissue with a modeled "cell" on a uniform 2D grid of cells, similar to the simulated data. Next, we estimate the high-resolution trans-membrane currents using Eq. (4) and the recorded electrograms. This can be done by solving the following regularized optimization problem, as suggested in Ref. [21].
where λ is the regularization parameter. Employing the ℓ1-norm regularization function (i.e., ‖·‖_1) helps to preserve the main features of the trans-membrane currents, among which are sparse fast temporal changes (deflections). These are of high importance for correct LAT estimation. More details on an efficient approach to solve Eq. (7) can be found in Ref. [22]. After estimating the high-resolution trans-membrane currents, we can estimate electrograms for varying electrode sizes and inter-electrode distances using Eq. (4) with an appropriate transfer function from Eq. (6).
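The spirit of this estimation step can be illustrated with a generic sparse-recovery sketch. The exact operator and solver of Refs. [21,22] are not reproduced above, so the forward model is written here as an arbitrary matrix A (built, in the full problem, from the electrode transfer function and the sampling operator), and the ℓ1-regularized least-squares problem is solved with plain iterative soft-thresholding; the sizes and the regularization weight are placeholders.

import numpy as np

def ista(A, phi, lam, n_iter=500):
    # Minimize ||phi - A s||_2^2 + lam * ||s||_1 by iterative soft-thresholding.
    L = np.linalg.norm(A, 2)**2                 # squared spectral norm of A
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ s - phi)                 # (half the) gradient of the quadratic term
        z = s - g / L
        s = np.sign(z) * np.maximum(np.abs(z) - lam / (2.0 * L), 0.0)   # soft threshold
    return s

# Placeholder demo: recover a sparse "current" vector from fewer measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 200))
s_true = np.zeros(200)
s_true[rng.choice(200, 10, replace=False)] = rng.standard_normal(10)
s_hat = ista(A, A @ s_true, lam=0.1)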
Electrogram analysis
Here, we introduce four measures that are used to characterize the properties of the electrogram arrays. Note that the amplitudes of both simulated and clinical electrograms are scaled with a constant value so that the amplitude of the electrograms of a homogeneous wave propagating through the tissue (as in tissue type T0), recorded by the smallest electrode (d_0 = 0.5 mm), equals 1 V. The scaling value is different for clinical and simulated recordings but the same for different electrode sizes. We characterize the properties of the simulated and clinical high-resolution electrogram arrays recorded during one atrial beat using the following four measures: 1. LATE: percentage of large absolute errors in LAT estimation, denoted by LATE. These are the error values that are larger than 10 ms. This measure is only evaluated for simulated tissue, where we have access to the true activation times. The true activation time is annotated as the time when the potential of the cell that is exactly under the electrode reaches the value of −40 mV, ensuring that the action potential is triggered. The estimated activation time of the simulated electrogram is annotated as the point of steepest descent. The threshold value of 10 ms was selected heuristically; however, different threshold values yield a similar pattern of changes.
2. LSC/B: length of lines of slow conduction or block in the tissue, denoted by LSC/B. To compute this value we first find the delay between each cell and its four direct neighbors on the grid of cells. If the delay is bigger than 0.7 ms, it is considered a slow conduction or block with a length of Δx. This threshold value is selected with respect to the standard delay between neighboring cells estimated from the standard tissue T0. Note that this is not a small threshold considering the cell-to-cell distance of Δx = 0.5 mm in the simulation. Moreover, since we model inhomogeneity in the tissue using dots and small lines of block, their effects on the LAT also range from very small to large values. Fig. 3(c) shows an example activation map with its lines of slow conduction or block denoted by thick black lines.
3. LVA: percentage of electrograms with a peak-to-peak voltage lower than 0.2 V, denoted by LVA. The peak-to-peak voltage is defined as the difference between the maximum and the minimum electrogram amplitude and is shown for an example electrogram in Fig. 3(a). The threshold value was selected heuristically, making sure it is small enough to indicate the changes between different tissue types and electrode diameters.
4. ND, SD, and MD: percentage of electrograms having no deflection (ND), a single deflection (SD), or multiple deflections (MD). We only count the deflections having an average slope smaller than −0.02 V/ms. The threshold value was selected heuristically to avoid small negligible deflections caused by noise and artifacts. Fig. 3(b) shows an example electrogram with 2 deflections.
It is important to note that all the measures are evaluated for high-resolution electrogram arrays, assuming that there is one electrode on top of each cell. This is not possible in practice because the inter-electrode distance should be larger than the electrode diameter. However, the results will confirm that the changes in the measures are due to the changes in the electrode size and not due to different inter-electrode distances. The above-mentioned measures are computed using custom written MATLAB code.
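For illustration, a minimal Python/NumPy sketch of these per-electrogram measures is given below (the study's own MATLAB implementation is not reproduced here). The thresholds are the ones quoted above; a 1 kHz sampling rate is assumed so that one sample corresponds to 1 ms, and the run-based deflection counting is one simple interpretation of the slope criterion.

import numpy as np

def lat_steepest_descent(egm):
    # Estimated LAT (in samples) at the point of steepest negative slope.
    return int(np.argmin(np.diff(egm)))

def late_percentage(lat_est, lat_true, thr=10.0):
    # Percentage of electrodes whose absolute LAT error exceeds 10 ms.
    return 100.0 * np.mean(np.abs(np.asarray(lat_est) - np.asarray(lat_true)) > thr)

def is_low_voltage(egm, thr=0.2):
    # LVA criterion: peak-to-peak amplitude below 0.2 V (in the scaled units).
    return (egm.max() - egm.min()) < thr

def count_deflections(egm, slope_thr=-0.02):
    # Count maximal runs of consecutive negative-slope samples whose mean
    # slope is steeper than -0.02 V/ms (assumes 1 sample = 1 ms).
    d = np.diff(egm)
    count, run = 0, []
    for v in np.append(d, 0.0):          # trailing 0 flushes the last run
        if v < 0:
            run.append(v)
        else:
            if run and np.mean(run) < slope_thr:
                count += 1
            run = []
    return count

def lscb_length(lat_map, dx=0.5, delay_thr=0.7):
    # Total length of slow-conduction/block lines: every neighboring-cell pair
    # with an absolute LAT difference above 0.7 ms contributes a length dx (mm).
    slow = (np.abs(np.diff(lat_map, axis=0)) > delay_thr).sum() + \
           (np.abs(np.diff(lat_map, axis=1)) > delay_thr).sum()
    return slow * dx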
Effect of electrode size on electrogram properties
In this section, we investigate the effect of the electrode size on the properties of the simulated electrograms using the measures introduced in Section 2.5. Five randomly generated conductivity maps were modeled for each of the tissue types T1 to T3, which were shown previously in Fig. 2. The tissues were stimulated with one or two activation waves entering the tissue from different locations, and the resulting electrograms were computed for four different electrode diameters d_0 ∈ {0.5, 2, 4, 8} mm. The last row of Fig. 2 shows an example electrogram from each tissue computed for the four different electrode diameters. For a better comparison, Fig. 4 also shows simulated electrograms for the different electrode sizes in one plot. These electrograms belong to T5 in Fig. 5 and show two distinct deflections as a result of a long line of block (for more details see Section 3.4).
The measures introduced in Section 2.5 were evaluated for all 2541 × 5 = 12,705 simulated electrograms of each tissue type (77 × 33 = 2541 electrograms per map) and are presented in Tables 1-4. As can be seen in the resulting tables, increasing the electrode size increases the error in LAT estimation, while the length of detected slow conduction or block lines in the tissue decreases. The exception is the homogeneous tissue T0, for which LSC/B remains almost the same for all electrode diameters. As a result, the final activation maps appear smoother. This will be discussed in more detail in Section 3.2.
The percentage of low voltage areas in the tissue also increases with increasing electrode size, indicating that using bigger electrodes decreases the amplitudes of the recorded electrograms. However, comparing the low voltage results across Tables 1-4 suggests that a larger diameter is more useful for indicating the differences in low voltage areas between different tissue types, and hence the mean conductivity of the underlying substrate, even if the discrete block lines in the simulation are missed.
The percentage of single and multiple deflections in the electrograms decreases with increasing diameter. This is because the slope of some of the deflections becomes very small, so they are no longer annotated as deflections.
However, as can be seen in the tables, the variations in electrogram properties caused by using different electrode diameters are more evident in tissues with higher levels of heterogeneity or more scarring. This also indicates that electrograms generated in homogeneous tissues are not much affected by the electrode diameter.
Effect of electrode size on the activation map
We use some examples to visualize the effect of electrode size on the resulting activation maps. As discussed earlier in Section 3.1, using bigger electrode sizes increases the error in LAT estimation and decreases the length of detected conduction block lines. This happens because the electrograms recorded by bigger electrodes are affected more by neighboring activities. The deflections generated by larger and stronger inhomogeneous waves in the neighborhood may therefore over-impose the small, but main, local deflections. As a result, the final activation maps appear smoother and more homogeneous. Fig. 4 shows an example of a simulated atrial activity recorded by different electrode sizes. As can be seen, two deflections are visible in each activity. The first deflection (at 90 ms) belongs to the main local activity and the second deflection (at 123 ms) belongs to a strong neighboring activity. As the electrode size increases, the second deflection gets steeper compared to the first deflection. Annotating the steepest descent as the LAT then results in annotating the second deflection for electrodes of diameter d_0 = 4 mm and 8 mm, an absolute error of about 33 ms in LAT estimation.
Fig. 5 shows the estimated activation maps of two simulated tissues recorded by electrode arrays with different electrode sizes. These examples imitate the two common patterns of change that we observed in clinical cases. T4 has the same tissue conductivity pattern as T2 (medium density of conduction blocks) with a different stimulation pattern (explained in Section 2.3), resulting in the generation of complex fractionated atrial electrograms. As can be seen, small waves in the tissue are over-imposed by larger and stronger activities in their surroundings, and the smooth variations from one color to another are replaced by sharp variations. In T5 (second row of Fig. 5) two lines of block are positioned along the center of the tissue. The activation starts from the area in between these lines and then propagates through the whole tissue. As can be seen, this abnormal area is completely lost when we use bigger electrode sizes. As mentioned before, this happens because the activities generated in this abnormal area have lower amplitudes compared to the stronger activities outside the block lines, and an electrode with a bigger size records more activities from its neighborhood. We discuss this case in more detail in Section 3.4.
Optimal electrode diameter and inter-electrode distance
The electrode transfer function in Eq. (6), shown in Fig. 1, can be used to calculate the optimal electrode diameter and inter-electrode distance. This can be done by investigating the FWHM for different electrode diameters. Fig. 6 shows the calculated FWHM as a function of the electrode diameter (with z_0 = 0.5 mm). This plot has two important features. First, even for a point electrode the FWHM is non-zero (FWHM = 1.73 mm). Secondly, the FWHM is almost constant for small diameters and the curve bends around d_0 = 0.5 mm. These observations lead to two important conclusions: (i) Optimal electrode diameter: an optimal electrode diameter is around d_0 = 0.5 mm. This is the largest value with an FWHM similar to that of a point electrode. Note that smaller electrodes are affected more by noise, so there is a trade-off between a high SNR and a small FWHM; it is therefore also not preferred to use the smallest electrode possible. (ii) Optimal inter-electrode distance (in order to capture all the spatial information of the electrical activities in the tissue): to find this parameter, we first need to estimate the maximum spatial frequency present in the electrical activities. This can be quite a complicated task due to the three-dimensional inhomogeneous structure of the atrial tissue and the complex unstable wave propagation patterns. On the other hand, no matter how high these spatial frequencies are, they will eventually be recorded by surface electrodes, which inherently act as a spatial low-pass filter. As shown in Ref. [22], the FWHM discussed in Section 2.2 can also be used as a short-hand measure of the appropriate inter-electrode distance. As an example, for an electrode with d_0 = 0.5 mm, we suggest an optimal inter-electrode distance of around 1.9 mm, which is equal to its FWHM at z_0 = 0.5 mm. Fig. 7 shows the FWHM as a function of both d_0 and z_0. As can be seen in this figure, as the electrode diameter or the electrode height (or equivalently, the tissue thickness) increases, the required inter-electrode distance also increases. This means that the FWHM and the low-pass filtering effect of the electrode increase, which results in spatial information being lost by the electrodes; even putting them closer to each other in an array will not compensate for that loss. Therefore, we can effectively use larger inter-electrode distances. Conversely, electrodes with smaller diameters have a smaller FWHM and can potentially capture spatial information with higher frequencies, and by putting these electrodes closer to each other on an array, we can record this information.
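Assuming the point_kernel, disc_kernel and fwhm helpers from the illustrative sketch in Section 2.2 are in scope, a curve in the spirit of Figs. 6 and 7 can be approximated with a simple sweep; the diameters and heights below are example values only, not the grid used in the study.

r = np.linspace(0.0, 30e-3, 6001)
for z0 in (0.5e-3, 1.0e-3):
    for d0 in (0.1e-3, 0.5e-3, 1e-3, 2e-3, 4e-3, 8e-3):
        print(z0, d0, fwhm(disc_kernel(r, z0, d0), r))
# For small d_0 the FWHM stays close to the point-electrode value (about
# 1.73 mm at z_0 = 0.5 mm) and grows noticeably only for larger diameters.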
Maximum electrode size for recording scarred tissue
In this section, we perform an experiment to investigate the maximum electrode size for the detection of abnormal areas and conduction block lines using simulated tissue. We use the same pattern as in T5 in Fig. 5 for the scarred tissue, where two lines of conduction block are positioned along the center of the tissue at a distance L_block from each other. The activation wave starts from the area in between these lines and then propagates through the whole tissue. As can be seen in T5, this abnormal area is completely lost when we use bigger electrode sizes. As mentioned before, this happens because the activities generated in this abnormal area have lower amplitudes compared to the stronger activities outside the block lines, and an electrode with a bigger size records more activities from its neighborhood. We increased the distance between the two parallel lines of block and visually inspected the activation maps estimated from different electrode arrays to determine the maximum electrode diameter d_0,max that can still provide some evidence of the abnormality in the tissue. The results can be seen in Fig. 8. As an example, the two block lines in T5 are at a distance L_block = 3.5 mm and, as can be seen in Fig. 8, the maximum electrode diameter that can still provide evidence of this abnormality is around 4.2 mm. Note that this is a simple example compared to complex clinical cases, where the results are affected by many underlying parameters of the tissue.
Clinical results
As discussed in Section 2.4.1, due to the unstable and unpredictable nature of electrical wave propagation during AF, it is not possible to repeat similar clinical recordings with different electrode arrays. Therefore, to model such recordings, we use our approach presented in Section 2.4.1 to first estimate high-resolution trans-membrane currents and then use them to interpolate and calculate electrograms for bigger electrode sizes. The methodology used for the estimation of the trans-membrane currents is discussed and evaluated in Ref. [22]. Here, to evaluate its performance in reproducing the electrograms, we compute the mean correlation coefficient between the real clinical electrograms and the electrograms recalculated from the estimated trans-membrane currents. The mean correlation coefficient is 0.97 ± 0.004 (mean ± standard deviation), which indicates a good reconstruction. Note that we can only calculate this measure for the low-resolution 24 × 8 clinical electrograms and for d_0 = 0.5 mm, as we do not have ground-truth EGMs for bigger electrode sizes or higher resolutions. Fig. 9 shows four neighboring clinical (real) EGMs and the interpolated electrograms in between them.
Statistical analysis
An analysis similar to that in Section 3.1 was performed on 10 clinical electrogram arrays of size 24 × 8 recorded from Bachmann's bundle in 10 different patients. The signals were recorded for 10 s during induced AF, resulting in fractionated electrograms with various levels of fractionation. Note that the electrograms were first interpolated and modeled for the different electrode sizes. This resulted in 24 × 8 × 4 electrograms in total. The measures introduced in Section 2.5 were evaluated for one atrial beat of length 150 ms (visually selected to make sure each electrogram contained atrial activity) and are presented in Table 5. We do not present results for the error in LAT estimation because we do not have access to the true values in clinical data. As can be seen in the table, the changes in the properties of electrograms recorded by different electrodes follow the same pattern as for the simulated data.
Changes in activation maps
Similar patterns to those in Section 3.2 are also seen in the clinical data. Fig. 10 shows two examples of how the high-resolution activation maps change when using different electrode sizes. As expected, the small deflections and small wavelets in T6 are over-imposed by larger and stronger activities in their surrounding area as the electrode size increases. This leads to a decrease in the total number of wavelets in the area. T7 shows an example where an abnormal area with long delays in the activation map is partly or completely missed due to the increase in electrode size.
Scaling electrograms' amplitude
A proper scaling of the amplitudes of electrograms recorded with different electrode sizes can, to some extent, compensate for differences in the measures that characterize the electrograms. We propose to use the ratio between the norms of the transfer functions of the electrodes to scale their amplitudes for a better comparison of their recorded electrograms. In this formulation, R_0(r) and R_d0(r) are calculated from Eqs. (5) and (6), φ is the scaled electrogram, and ‖·‖_2 is the Euclidean norm or l2-norm. This makes measures like LVA and the number of deflections more invariant to the electrode diameter. However, it does not affect the estimation of LATs, or any parameter that is extracted from them, such as LSC/B. Table 6 shows the new measures (cf. the non-scaled version in Table 5) after using the norm of the distance kernel to scale the data. Note that approaches that use the maximum amplitude or steepness of the recorded electrograms for scaling or normalization are realization-based and thus less stable. Such parameters depend on the propagation patterns and are prone to spatial and temporal variations, making the results incomparable and not generalizable.
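A hedged sketch of this scaling idea follows; since the scaling equation itself is not reproduced above, the exact form used here (multiplying by the ratio of the 2D l2-norms of the point-electrode and finite-electrode kernels, evaluated over a finite window) is an assumption, and the kernels are the illustrative point_kernel/disc_kernel helpers from the sketch in Section 2.2.

def scale_egm(egm, d0, z0=0.5e-3, r_max=30e-3, n=6001):
    # Scale an electrogram recorded with diameter d0 by the ratio of the
    # (windowed) 2D l2-norms of the point- and finite-electrode kernels.
    r = np.linspace(0.0, r_max, n)
    w = 2.0 * np.pi * r * (r[1] - r[0])         # radial area weights for the 2D norm
    n0 = np.sqrt(np.sum(point_kernel(r, z0)**2 * w))
    nd = np.sqrt(np.sum(disc_kernel(r, z0, d0)**2 * w))
    return egm * n0 / nd                        # boosts recordings from larger electrodes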
Discussion and conclusion
In this paper, we studied the effect of the electrode size on the properties of the recorded electrograms. We started with simulated electrograms of 2D atrial tissues and presented the effect of different electrode sizes on electrogram properties, including the error in LAT estimation, the length of slow conduction or block lines (LSC/B) observed on the resulting activation map, the percentage of observed low voltage areas (LVA), and the number of deflections in the recorded electrograms. The results were then tested on clinical electrograms of 10 patients recorded from Bachmann's bundle. Since we had no access to electrograms recorded with different electrode diameters, we first estimated the high-resolution trans-membrane current maps using the approach in Ref. [22] and then used these currents and the electrode transfer function to generate such recordings. The results were comparable to those of the simulated data. The results show that using bigger electrodes produces larger errors in LAT estimation, which is in accordance with the results of a previous study [10]. These errors in LAT estimation result in the estimation of a smoother activation map than the true map. Similar observations were made in experiments on clinical bipolar electrograms in Refs. [3,5].
The electrograms recorded with a bigger electrode size are in general smoother, with smaller slopes and smaller peak-to-peak voltages. Some of the deflections in these signals are so smooth that they are not annotated as deflections in the recording. This results in an increase in low voltage areas in the tissue, which agrees with the results of previous studies in Refs. [6,8]. However, if no recording with a smaller electrode is available for setting proper thresholds, one might use smaller threshold values for detecting such deflections, which can result in detecting more deflections. This can explain why some studies such as [7][8][9] suggest that increasing the electrode size increases the fractionation level in the tissue.
However, comparing the low voltage results in Tables 1-4 suggests that a larger diameter is more useful for indicating the difference in low voltage areas between different tissue types with different inhomogeneity levels. Considering that in inhomogeneous cardiac tissue lower conductivity also means lower voltage in that area, we can conclude that bigger electrodes are more useful for indicating the mean conductivity of the underlying substrate, even if the discrete slow conduction or block lines in the activation map are missed.
Although these conclusions have been partly discussed or hinted at in previous studies, in the current study we followed a more systematic approach towards them. By employing electrophysiological models and the electrode's transfer function, we could analyze and discuss these effects in more depth. Moreover, the introduced approach for interpolating and modeling clinical electrograms with different electrode sizes allowed us to investigate these effects for recordings of the same wave propagation patterns. It also enabled us to focus only on the electrode diameter and not on the inter-electrode distances.
Moreover, we discussed the optimal electrode diameter and the required inter-electrode distances (or array resolution) for capturing all the available spatial information. We performed an experiment to investigate the maximum electrode size for capturing an inhomogeneous activity in between two parallel block lines in the tissue. We also introduced a proper way of scaling electrograms recorded with different electrode diameters for a better comparison.
These results show the importance of the recording electrode array for the properties of the electrograms, which needs to be considered in any further evaluation and analysis of the data, especially if the data are used to guide treatment such as electrogram-based ablation.
Study limitations
In this study, we modeled the 3D tissue of the atria as a 2D grid of cells, assuming a constant electrode height z_0 for the whole tissue under the electrode array. Although 3D forward tissue models of the atria with varying values of z_0 have already been developed in the literature, employing them in an inverse problem for the estimation of trans-membrane currents is not practical due to the complexity of these models. Moreover, it would require a proper estimation of z_0 for each recording site beforehand.
We did not have access to electrograms recorded from similar locations and different electrode sizes for a more complete evaluation of our results, as this is not possible in practice due to the temporal and spatial variations of the underlying wave propagation patterns during AF and especially for areas with complex fractionated electrograms.
The clinical electrograms used in this study were recorded from Bachmann's bundle, which has a predominant route of conduction from right to left and a potential role in AF [23], and which may differ from the rest of the atria. However, there are already regional differences in potentials in the atria even during sinus rhythm [24]. Although our method does not depend on these specific properties, the exact results in Tables 1-6 may not be generalizable to other regions of the atria.
We used similar threshold values for the evaluation of the measures introduced in Section 2.5 for simulated and real electrograms. Although both types of electrograms were scaled such that a homogeneous planar wave has a maximum absolute amplitude of 1 V, the exact selection of these parameters is more prone to error in real electrograms, as we do not have access to such exact recordings.
Moreover, we ignored the effect of noise in our simulations. Although smaller electrodes provide sharper and more localized recordings, they are affected more by noise and artifacts. Therefore, using a smaller electrode may not always improve the recordings.
Declaration of competing interest
None Declared.
Fig. 2. Conductivity map, activation map and an example electrogram simulated for different electrode sizes for four simulated tissues. T0 is modeled as a homogeneous tissue, while T1, T2 and T3 have a low, medium and high density of conduction block in their conductivity map, respectively. The red rectangle represents the electrode array position. We assume there is one electrode on top of each modeled cell.
Fig. 3. (a) Unipolar electrogram with the peak-to-peak voltage denoted by the red two-sided arrow. If the peak-to-peak voltage is smaller than 0.2 V, the EGM is counted as a low voltage electrogram. (b) Unipolar electrogram from (a) with two of its deflections (downward slopes) denoted by red lines. As can be seen, there are more deflections in the signal, but we only count those with an average slope smaller than −0.02 V/ms. (c) Example activation map (a segment of T7 in Fig. 10) with the lines of block denoted by black lines. These are the areas between two neighboring cells with delays in LAT larger than 0.7 ms.
Fig. 4. An example of a simulated atrial activity recorded by different electrode sizes.
Fig. 5. Activation maps estimated from high resolution electrogram arrays with different electrode sizes. T4 has the same tissue conductivity pattern as T2, and in T5 two lines of block are positioned along the center of the tissue.
Fig. 8. Maximum electrode diameter d_0,max for recording a visible scarred tissue, as a function of L_block, the distance between the two lines of block.
Fig. 9. Real (in orange) and interpolated (in blue) electrograms for clinical recordings with d_0 = 0.5 mm. Note that, for easier inspection, only electrograms from the dark blue electrodes are shown.
Fig. 10. Example activation maps estimated from high resolution electrogram arrays with different electrode sizes, estimated from clinically recorded electrogram arrays.
|
2021-05-29T06:16:51.716Z
|
2021-05-18T00:00:00.000
|
{
"year": 2021,
"sha1": "578507aff8cad2bd98bce7abc8c946a51239e6f8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.compbiomed.2021.104467",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8ad64d3461e285a934da03df7a806357bd2c291f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
}
|
23692289
|
pes2o/s2orc
|
v3-fos-license
|
Female employees' perceptions of organisational support for breastfeeding at work: findings from an Australian health service workplace
Background Women's return to work can be a significant barrier to continued breastfeeding. Workplace policies and practices to promote and support continued, and longer duration of, breastfeeding are important. In the context of the introduction of a new breastfeeding policy for Area Health Services in New South Wales, Australia, a baseline survey was conducted to describe current practices and examine women's reports of perceived organisational support on breastfeeding intention and practice. Methods A cross sectional survey of female employees of the Sydney South West Area Health Service was conducted in late 2009. A mailed questionnaire was sent to 998 eligible participants who had taken maternity leave over the 20-month period from January 2008 to August 2009. The questionnaire collected items assessing breastfeeding intentions, awareness of workplace policies, and the level of organisational and social support available. For those women who had returned to work, further questions were asked to assess the perceptions and practices of breastfeeding in the work environment, as well as barriers and enabling factors to combining breastfeeding and work. Results Returning to work was one of the main reasons women ceased breastfeeding, with 60 percent of women intending to breastfeed when they returned to work, but only 40 percent doing so. Support to combine breastfeeding and work came mainly from family and partners (74% and 83% respectively), with little perceived support from the organisation (13%) and human resources (6%). Most women (92%) had received no information from their managers about their breastfeeding options upon their return to work, and few had access to a room specially designated for breastfeeding (19%). Flexible work options and lactation breaks, as well as access to a private room, were identified as the main factors that facilitate breastfeeding at work. Conclusions Enabling women to continue breastfeeding at work has benefits for the infant, employee and organisation. However, this baseline study of health employees revealed that women felt largely unsupported by managers and their organisation to continue breastfeeding at work.
Background
Breastfeeding is one of the most natural, protective and cost-effective practices a mother engages in with her new infant [1][2][3]. Australian breastfeeding initiation rates at both the national and state level are high (around 90 percent) [4][5][6], but by three months, exclusive breastfeeding has dropped to 50 to 60 percent, and is at 15 percent at six months, well below the recommendation of exclusive breastfeeding for the first six months of life [4][5][6][7][8][9]. Breastfeeding continues to be a major public health priority at both state and national levels [9,10].
With women now representing 46 percent of the Australian labour force, it is evident that women make a significant contribution to the national economy [11]. Twenty one percent of previously working Australian women resume employment in the first six months following childbirth, and 42 percent by 12 months [4]. A woman's return to work has repeatedly been found to be a major contributor to the premature cessation of breastfeeding [12][13][14][15][16][17][18][19].
For the employer, the benefits of providing a working environment conducive to breastfeeding outweigh the costs. If breastfeeding is supported in the workplace, women are more likely to return to work, and return earlier, which contributes to women maintaining their job skills, as well as reducing staff turnover [20][21][22]. Women are also more likely to have reduced frequency and length of work absenteeism due to fewer and less severe baby-related illnesses [20][21][22]. Additionally, women are more likely to have higher morale and improved levels of concentration, resulting in increased productivity [20][21][22]. Accommodating breastfeeding mothers may also contribute towards the development of a positive corporate image [21,23].
Workplaces can be an ideal setting for implementing policies and practices to promote and support the continuation and longer duration of breastfeeding [15,21]. Studies indicate that mothers who have convenient access to their infant during the work day, or who expressed breast milk at work, have longer breastfeeding duration than other mothers [24,25]. Support from staff and management, access to facilities to feed their infant or express and store breast milk, flexible hours and lactation breaks have all been identified as crucial elements required by women to successfully continue breastfeeding on their return to work [4,15,17,20,[26][27][28][29][30][31]. A study by Dodgson, Chee and Yap [32] examining workplace breastfeeding policies and practices in Hong Kong hospitals found those hospitals with a committee addressing workplace breastfeeding issues achieved a more supportive environment than those hospitals without. Furthermore, development of a strategic plan and a positive attitude towards breastfeeding at work significantly enhanced the likelihood of policy success [29,33].
The aim of this study was to examine the attitudes, breastfeeding practices and experiences of female employees in an Australian health service workplace who had returned to work from maternity leave, and to investigate their perceived level of organisational support to combine breastfeeding and work. This baseline survey forms a component of an evaluation, using a before and after study design, of a workplace intervention to encourage and support health service employees to combine breastfeeding and work.
The Sydney South West Area Health Service (SSWAHS), Australia looks after all public hospitals and health care facilities (eg early childhood clinics, community health centres) in the central and south-west regions of Sydney, New South Wales (NSW). It covers an area of 6380 square kilometres and contains 35 health facilities, including 11 hospitals. The workforce is predominately female (74%) and in 2007 numbered 23 419 employees [34]. Staff are employed in a diverse range of clinical and non-clinical occupations, including doctors, nurses, allied health, population health, administration, management, information technology, and domestic services. Clinical staff may have less flexibility in their daily workload due to patient demands, which could impact on their ability to take regular lactation breaks.
At the time of the survey, full time and permanent part time health service employees who had completed 40 weeks continuous service were entitled to 14 weeks paid maternity leave and a further period of unpaid maternity leave of up to 12 months from the date of birth of their child. With management agreement, employees could also return from maternity leave on a part time basis until their child reached school age. Staff employed under the Nursing and Midwifery Award were entitled to a 30 minute paid lactation break per eight hour shift. Staff employed under other awards who wished to breastfeed or express breast milk during work time had to do so during their allocated meal breaks or unpaid time. Of the 35 SSWAHS facilities, only two had a dedicated staff breastfeeding room.
Study design
A cross sectional survey using a mailed questionnaire was conducted in central and south-west Sydney, Australia, during November and December 2009.
Study participants
A convenience sample of 998 female employees from SSWAHS who had valid home addresses and had taken maternity leave over the 20 month period from January 2008 to August 2009 were eligible to participate. Eligible participants were identified (including de-identified home addresses) by the SSWAHS Human Resources database. Forty questionnaires were returned due to incorrect addresses and subtracted from the denominator. Completion of the questionnaire was taken as evidence of consent to participate.
Data collection and key measures
A self-complete 59-item questionnaire, including a return prepaid envelope, was mailed to eligible participants in November and December 2009. Two reminder letters were sent to non-respondents, the first approximately three weeks after the initial mail-out and again three weeks later.
The survey comprised of three sections, with participants asked to complete each subsequent section of the survey based on their responses to certain categorical questions. Section 1 consisted of 27 items which all participants were required to complete. In this section, women were asked their age, education level, household income, language spoken at home and employment status using standard questions from the NSW Health Survey [35]. Women were also asked to recall details relating to their most recent maternity leave including parity, length of maternity leave, intention to breastfeed prior to baby's birth, intention to return to work, their awareness of organisational policies or facilities to support breastfeeding at work, and whether they received any information from their employer about options to continue to breastfeed when they returned to work. Women were also asked about their method of infant feeding and whether they returned to work at SSWAHS following maternity leave.
Women who indicated they had returned to work were asked fixed item questions regarding the age of their child when they returned to work, hours worked per week, child care arrangements and open-ended questions on the benefits and difficulties of returning to work.
In section 2, only women who indicated they had breastfed at birth (whether they returned to work or not), were asked 7 fixed-item questions about their breastfeeding practices including duration, intention to breastfeed when returning to work, whether they communicated with management about their breastfeeding and work requirements prior to returning to work, and whether receiving information about organisational support available to combine breastfeeding and work would have influenced their decision to do so. Where relevant, women were also asked two open-ended questions related to reasons for stopping breastfeeding, and reasons for not continuing to breastfeed after returning to work.
In section 3, women who indicated they continued to breastfeed when they returned to work were asked a further 18 fixed-choice questions assessing their perceptions and practices of breastfeeding in the work environment including availability and features of facilities, lactation breaks, and perceived level of organisational and social support. Women were asked to select relevant enablers and barriers to breastfeeding and work from a pre-determined list.
Ethics
The study was approved by the Ethics Review Committee of Sydney South West Area Health Service (Royal Prince Alfred Hospital Zone), Protocol No X09-0216 and HREC/09/RPAH/353.
Analysis
Statistical analyses were carried out using SPSS (Version 15). Relationships between study and outcome variables were examined using a Pearson chi-square test.
Results
Of a total of 998 surveyed, 496 women completed the survey representing a response rate of 51.7 percent. A full report of the results is available from the corresponding author.
Characteristics of respondents
The mean age of women completing the survey was 35 years and the majority were married, of non-Aboriginal descent, and university educated (84%) ( Table 1). Just over a third of women spoke a language other than English (LOTE) at home, which is representative of the local area. The average household income for more than half of the survey participants was high, being A$80,000 (Table 1). Occupation responses were coded into three categories "nurse/midwife", "administration/management" and "clinical/allied health". Almost half (49%) of respondents were a nurse or midwife, 39 percent were in a clinical or allied health role and a further 12 percent in administrative or management roles ( Table 1).
Awareness of workplace breastfeeding policies and facilities
At the time of the survey, women were entitled to maternity leave, flexible work practices and, for women employed under the nursing and midwifery award, a paid 30 minute lactation break per shift. Most respondents were aware of their maternity leave entitlements (90%; n = 446) and to a lesser extent the option of flexible work practices (63%; n = 312). At least half of the midwives responding to the survey were aware of their lactation break entitlement (n = 38) but only 28 percent of general nurses were aware of this entitlement (n = 204). Amongst all women surveyed, awareness of a breastfeeding room was low (14%) reflecting the fact that only two area health facilities had a designated room available at the time.
Ninety seven percent of women indicated that prior to their baby's birth they intended to breastfeed, with 98 percent of women reporting that their infant was "ever breastfed" (n = 419). Breastfeeding rates declined in the first few months, with 13 percent (n = 29) stopping breastfeeding by 12 weeks (3 months), and a further 24 percent (n = 54) by 24 weeks (6 months).
Only five percent of the 493 respondents had been given written or verbal information from their employer about the option to continue breastfeeding upon their return to work. Almost all mothers (95%; n = 470) intended to return to work following maternity leave, with another four percent undecided. At the time of the survey, 52 percent of mothers were working full or part time (n = 253), while 44 percent were still on maternity leave (n = 220). For those women whose maternity leave had finished, 79 percent (n = 327) stated they returned to work at SSWAHS following their maternity leave, indicating a high staff retention rate among the organisation. Most women (86%, n = 300) returned to work part time (less than 35 hours per week), with two thirds returning before their baby's first birthday (less than 6 months of age, 11.1%; 6-11 months of age, 54%).
Breastfeeding intention and practice upon return to work
Returning to work was the second most frequent response to the question "What was the main reason for stopping breastfeeding your child?" Of the 259 women who had reportedly ceased breastfeeding at the time of the survey, 25 percent reported returning to work as the main reason why they stopped breastfeeding (n = 65). Other main reasons included: that the mother felt the infant was ready to stop (31%, n = 80) and that milk supply was insufficient (19%, n = 50).
Of 390 respondents, nearly 60 percent of women (n = 230) had considered the possibility of breastfeeding after returning to work, but only 40 percent (n = 156) continued to do so upon return to work (Table 2). For those women who had considered breastfeeding on return to work but had not, the main reason given was a lack of breastfeeding/expressing facilities and a lack of workplace and managerial support (44%, n = 66). Other reasons included that the infant was ready to stop (19%, n = 29) and that women felt it was too difficult or inconvenient (19%, n = 28). Eight women (5%) had planned to stop breastfeeding upon their return to work. Education level, occupation and income had no effect on intention and practice of breastfeeding, nor did the age of the infant or part-time status when the mother returned to work.
Only eight percent of women (n = 29) had spoken to their manager about breastfeeding prior to returning to work, although nearly 60 percent felt that they "would have been more likely to continue to breastfeed after returning to work" if they had received information and support about this possibility prior to going on maternity leave (n = 205).
Breastfeeding practices at work
When asked "how did you continue to breastfeed after you returned to work?" 36 percent reported combining breastfeeding and expressing milk at work, while only two women (1%) were able to breastfeed their baby during work hours. More than a third of women did not breastfeed or express milk at work, preferring to breastfeed before and after work, and have the infant provided with infant formula while they were away ( Table 2).
When women were asked "When did you breastfeed or express while at work?" the most common response was during allocated meal break times (57%), followed by additional paid lactation breaks (16%), and additional unpaid lactation breaks/own time (14%). Forty two percent (n = 34) found their breaks for breastfeeding or expressing to be inflexible.
Most breastfeeding women used a manual pump (51%) or electric pump (33%) to express, and most used their own pump (83%). Expressed milk was usually stored in the staff refrigerator. Only 20 percent (n = 14) of women expressed or breastfed in a room especially designated for the purpose, with one in four women using their office or another location (54%).
When questioned about the qualities of the room where they breastfed, most women reported it to be clean (87%), within a five minute walk of their work station (90%) and providing access to a power point (83%) and cleaning facilities (79%). However, two thirds stated that the room was not always available when needed and only 57 percent reported it to be private. A comfortable chair was reported to be available in just over half of rooms.
Breastfeeding support at work
Women reported several factors that enabled them to continue breastfeeding whilst at work. Flexible work options (including working part time or reduced hours) (17%, n = 71) and flexibility of break times (11%, n = 40) were the most commonly mentioned enablers. Support from management and colleagues (11% (n = 43) and 13% (n = 51) respectively) were also important, as was access to a private room for breastfeeding or expressing (n = 40). Similar responses were given when women were asked to nominate factors that made it difficult to breastfeed at work. Nearly 20 percent stated inflexible break times and a lack of lactation breaks made breastfeeding difficult (n = 77). Role overload due to multiple demands (n = 60), lack of access to a private room (n = 49) and lack of available information (n = 39) were also issues reported. Support to combine breastfeeding and work came mainly from family and partners. Many women listed the organisation (61%, n = 71) and human resources department (70%, n = 81) as providing no support. Line managers and co-workers offered a varied level of support (Table 3).
Workplace characteristics and intention to, and practice of, breastfeeding at work were examined. Receiving either verbal or written information about breastfeeding was not significantly associated with women's intention to breastfeed or breastfeeding practice (p = 0.29 and p = 0.89 respectively), nor was a high level of support from partner/family (p = 0.97 and 0.68 respectively) or organisation (p = 0.80 and p = 0.66). However, small cell sizes limit the meaningfulness of these analyses. When questioned about their experience of breastfeeding at work, 58 percent found it to be very positive or positive, 23 percent were neutral and 21 percent found it to be a negative or very negative experience. Women were asked to provide a reason for their response regarding their breastfeeding experiences, and comments from the few women who responded included: "It was stressful worrying about needing to express regularly, especially when work was busy. I had to cover a shared office window with a pillowslip for privacy and still was nervous someone would walk in on me." "Flexible hours made it easier to combine the two. Very supportive boss and coworkers. Worked from home and attended work only as required." "The only suitable place was a toilet (as it had privacy) but who wants to sit on a toilet for 20 minutes to express?"
Discussion
This baseline study of women employed by an Australian area health service revealed that employees felt largely unsupported by managers and their organisation to continue breastfeeding at work. Support to combine breastfeeding and work came mainly from family and partners with little perceived support from the organisation and human resources. However, there was no statistically significant association between organisational support and intention to, or practice of, breastfeeding at work. The small numbers in subgroup analyses are likely to be a contributing factor to the lack of a clear association, as well as the strong support from family and partners, which may have overshadowed support offered by the organisation.
Returning to work was one of the main reasons women ceased breastfeeding, similar to findings in other studies [5,12,13,15], with 60 percent of women intending to breastfeed, but only 40 percent continuing to do so on return to work. Flexible breaks and work options, as well as access to a private room facilitated breastfeeding at work.
The particular characteristics of this study population, being highly educated women working in an area health service may limit comparability of these results with other workplace studies. A limitation of the study is that no assessment of overall workplace and management support for all staff was made, and it is possible that non-breastfeeding employees may also perceive organisational support to be minimal. The low response rate also limits the generalisability of the study. Ideally, the timeframe between the sample population of women and when the survey was conducted should have been wider to allow the opportunity for more women to respond once their maternity leave was finished and they had returned to work.
Enabling women to continue breastfeeding at work has benefits for the infant, employee and organisation [2,21,22]. However, as evidenced by findings in this study, despite a strong desire to continue breastfeeding, women need greater support in the workplace if they are to successfully combine breastfeeding and work. Organisations need to create a workplace culture that supports and promotes breastfeeding, and institute policies that allow flexible hours, lactation breaks and appropriate facilities to express and store breast milk [17,20,26,29]. This baseline study was conducted prior to the implementation of the SSWAHS breastfeeding policy, and has highlighted areas that require improvement to better support breastfeeding employees. The results of the post policy survey will hopefully reveal greater support for women to breastfeed, resulting in higher rates of sustained breastfeeding among SSWAHS employees.
Conclusions
The transition period of returning to work is a critical time to support the continuation of breastfeeding amongst female employees. Workplaces and employers have a crucial role in providing supportive workplace environments, appropriate facilities, strong management support, and relevant policies in order for women to feel adequately supported and encouraged to continue to breastfeed when returning to work.
List of abbreviations
LOTE: Language other than English; NSW: New South Wales; SSWAHS: Sydney South West Area Health Service.
Efficacy and Safety of Apatinib in Patients with Recurrent Glioblastoma
Background and Objective Glioblastoma is a cranial malignant tumor with a high recurrence rate after surgery and a poor response to chemoradiotherapy. Bevacizumab has demonstrated efficacy in the treatment of glioblastoma by inhibiting vascular endothelial growth factor, but the efficacy of vascular endothelial growth factor receptor tyrosine kinase inhibitors varies in treating glioblastoma. This single-arm prospective study aimed to explore the efficacy and safety of the vascular endothelial growth factor receptor tyrosine kinase inhibitor apatinib in treating recurrent glioblastoma after chemoradiotherapy. Methods A total of 15 patients with recurrent glioblastoma (2016 World Health Organization grade IV) after chemoradiotherapy were enrolled in this study from September 2017 to September 2019 and treated with apatinib 500 mg once daily. Responses were evaluated according to the Response Assessment in Neuro-Oncology criteria, and adverse events were recorded according to the National Cancer Institute Common Terminology Criteria for Adverse Events Version 4.0. Results The overall response rate was 33.3%, and the disease control rate was 66.6%. The median progression-free survival was 2 months, and the median overall survival was 6.5 months. The apatinib dose was adjusted in seven patients because of adverse events (46.6%). The most common adverse events were thrombocytopenia (53.3%), asthenia (40%), and hand-foot syndrome (33.3%). Conclusions Apatinib might be effective in treating recurrent glioblastoma after chemoradiotherapy in terms of the overall response rate, but the efficacy is not durable and the clinical benefit is limited. The adverse effects of apatinib were acceptable. Clinical Trial Registration ChiCTR-ONC-17013098, date of registration: 24 October, 2017.
Introduction
Glioblastoma (GBM, World Health Organization grade IV) is the most common intracranial malignancy, accounting for half of all primary brain tumors. Glioblastoma is also the most malignant type of glioma; it is aggressive and prone to recurrence. Glioblastoma shows a poor response to various treatments, including surgery, radiotherapy, and chemotherapy. The median survival of patients with GBM is 14-16 months, and is only 3-9 months after recurrence [1]. Best supportive care achieves a median survival time of only 3.1 months. Therefore, effective treatments for recurrent GBM are still lacking [2].
Glioblastoma is one of the most richly vascularized tumors in the central nervous system, and hence can express a variety of specific tumor angiogenesis regulators, including hypoxia-inducible factor-1 subunit alpha, angiopoietin 1/2, transforming growth factor β1, platelet-derived growth factor, and fibroblast growth factor [3]. Most of these regulators exert an angiogenic effect through the vascular endothelial growth factor (VEGF) pathway. The expression of VEGF in the tumor tissues is closely related to the grade and prognosis of glioma [4].
Bevacizumab binds to VEGF and inhibits the activation of the angiogenesis pathway. It was approved for treating GBM by the US Food and Drug Administration in 2009. The addition of bevacizumab to the standard regimen of temozolomide and radiotherapy improved the progression-free survival (PFS) of treatment-naïve GBM by 3.4-4.4 months [5]. For recurrent GBM, additional use of bevacizumab improved the PFS by 2.7 months compared with lomustine alone [6]. The overall safety of adding bevacizumab to the treatment of GBM is acceptable. A meta-analysis of 480 patients with GBM showed the most frequent adverse events (AEs) associated with bevacizumab were asthenia, headache, diarrhea, and hypertension [7]. However, treatment interruption because of AEs was 20% in patients treated with bevacizumab plus chemotherapy, which is significantly higher than the 5.5% in patients treated with bevacizumab alone [7].
Apatinib, also known as rivoceranib, is a novel small-molecule drug approved in China for the treatment of advanced or metastatic gastric cancer. It blocks VEGF-mediated signal transduction by occupying the intracellular ATP-binding site of the receptor tyrosine kinase, thereby inhibiting tumor angiogenesis. Compared with VEGF antibody drugs, apatinib has a stronger inhibitory effect on the VEGF pathway in vitro [8]. Several case reports first demonstrated the efficacy of apatinib in treating recurrent GBM [9,10]. Subsequent small-scale studies reported further promising results. A clinical study reported a PFS of 8.3 months in nine patients with recurrent high-grade glioma who were treated with apatinib (500 mg once daily [qd]) concurrently with irinotecan (340 mg/m2 or 125 mg/m2 every 21 days) [11]. Another clinical study reported an objective response rate (ORR) of 45% and a PFS of 6 months in 20 patients with recurrent GBM who were treated with apatinib (500 mg qd) and temozolomide (100 mg/m2, 7 days on with 7 days off) [12]. Similar results, an overall survival (OS) of 9.1 months and a disease control rate of 82.3%, were achieved in 18 patients with recurrent high-grade glioma who were treated with apatinib (500 mg qd) and temozolomide (50 mg/m2 qd) [13].
However, combination therapy inevitably increases adverse reactions in patients with recurrent GBM who have a poor performance status. It is not clear whether apatinib alone is equally effective while causing fewer adverse reactions. Currently, only a few case reports and small retrospective studies have indicated that apatinib might be effective in the treatment of recurrent glioma [14]. Our prospective study aimed to further evaluate the efficacy and safety of apatinib monotherapy in treating recurrent GBM after chemoradiotherapy.
Patients
The inclusion criteria were as follows: aged 18-75 years; GBM (2016 World Health Organization grade IV) confirmed by surgical pathology; at least one intracranial target lesion defined according to the Response Assessment in Neuro-Oncology (RANO); failed previous intracranial lesion radiotherapy and temozolomide-based chemotherapy regimens (progressive disease confirmed with clear imaging evidence during treatment or within 6 months after treatment); no indications for reoperation; and Eastern Cooperative Oncology Group performance status scores of 0-2.
The exclusion criteria were as follows: resistant hypertension; severe cardiovascular diseases; abnormal blood coagulation or current gastrointestinal bleeding; major surgery in the previous 3 months; participation in other clinical trials in the previous 3 months; and renal insufficiency or liver dysfunction.
Written informed consent was obtained from all participants before enrollment. The study protocol was approved by the Ethics Committee of Fudan University Huashan Hospital. The study was conducted in compliance with Good Clinical Practice guidelines and the Declaration of Helsinki (Clinical Trial Registration No. ChiCTR-ONC-17013098).
Study Design
This single-center, prospective, single-arm clinical trial was performed to preliminarily investigate the efficacy and safety of apatinib mesylate in patients with recurrent GBM after surgery, radiotherapy, and temozolomide chemotherapy.
Procedure
Patients were treated with apatinib at an initial dose of 500 mg/day until progression, death, or serious AEs. In the case of an adverse reaction ≥ grade 3, the drug was discontinued until the adverse reaction returned to grade 1 and was then resumed at 250 mg qd orally. If an adverse reaction of grade 3 or above occurred again, the drug was withdrawn. Molecular pathological examinations were performed at the Huashan Hospital pathology department following primary surgery. Methylguanine methyltransferase (MGMT) promoter methylation status was evaluated by polymerase chain reaction and verified by quantitative pyrosequencing. Mutations in the IDH1 gene were investigated by Sanger sequencing. Ki-67 was evaluated by immunohistochemistry. Blood tests were completed at the College of American Pathologists-certified Central Laboratory of Huashan Hospital.
Endpoints
The primary endpoint was the ORR. The secondary endpoints were PFS, OS, and AEs. Cranial magnetic resonance imaging was performed after apatinib treatment and every month, or when there were significant signs of progression. Magnetic resonance imaging was used for examining the lesions, and complete remission (CR), partial response (PR), stable disease (SD), and progressive disease (PD) were evaluated according to the RANO criteria. The ORR was calculated as (CR + PR)/total number of patients × 100%. The disease control rate was calculated as (CR + PR + SD)/total number of patients × 100%. Progression-free survival was the time from the start of treatment to PD or death. Overall survival was defined as the time from the start of treatment to death. Adverse events were evaluated according to the National Cancer Institute Common Terminology Criteria for Adverse Events Version 4.0, and monitored at consultations.
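The ORR and disease control rate defined above reduce to simple counts over the RANO-assessed best responses. As an illustration only (not part of the original analysis), a minimal Python sketch of the calculation could look like the following; the response labels and cohort composition in the example are hypothetical.

```python
from collections import Counter

def response_rates(best_responses):
    """Compute ORR and DCR (in percent) from a list of best responses.

    best_responses: iterable of strings, each one of "CR", "PR", "SD", "PD"
    (labels are illustrative; any consistent coding works).
    """
    counts = Counter(best_responses)
    n = len(best_responses)
    orr = 100.0 * (counts["CR"] + counts["PR"]) / n
    dcr = 100.0 * (counts["CR"] + counts["PR"] + counts["SD"]) / n
    return orr, dcr

# Example consistent with the reported cohort: 5 PR, 5 SD, 5 PD among 15 patients
# gives ORR = 33.3% and DCR = 66.7% (reported as 66.6% in the paper).
orr, dcr = response_rates(["PR"] * 5 + ["SD"] * 5 + ["PD"] * 5)
print(f"ORR = {orr:.1f}%, DCR = {dcr:.1f}%")
```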
Statistical Analysis
The PFS and OS were estimated by the Kaplan-Meier method. The corresponding two-sided 95% confidence intervals were calculated via the Brookmeyer-Crowley method. All analyses were performed with SPSS version 25.0 (IBM Corporation, Armonk, NY, USA).
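The same Kaplan-Meier estimates can be reproduced with standard survival libraries outside SPSS. A minimal sketch using the Python lifelines package is shown below, assuming illustrative (made-up) follow-up data; note that lifelines reports the median survival with its default confidence-interval construction rather than the Brookmeyer-Crowley method used in the paper.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.utils import median_survival_times

# Illustrative progression-free survival times (months) and event indicators
# (1 = progression or death observed, 0 = censored); not the study data.
durations = np.array([1.0, 1.5, 2.0, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0, 8.0])
events = np.array([1, 1, 1, 1, 1, 1, 1, 0, 1, 0])

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events, label="PFS")

median_pfs = kmf.median_survival_time_          # Kaplan-Meier median PFS
median_ci = median_survival_times(kmf.confidence_interval_)  # 95% CI for the median
print(f"Median PFS: {median_pfs} months")
print(median_ci)
```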
Clinical Characteristics of the Patients
A total of 15 patients with locally recurrent GBM were enrolled in the study from September 2017 to September 2019. The clinical characteristics of these patients are shown in Table 1. Female patients accounted for 26.6%. The median age was 52 years. Patients with an Eastern Cooperative Oncology Group performance status score of 2 accounted for 60%. The average duration of treatment was 3.4 months. The median time between the first surgery and recurrence was 7 months (range 2-27 months). All patients had completed the Stupp protocol. Dexamethasone was used at enrollment and during apatinib treatment in five patients, at a dose of 5-7.5 mg/day.
Molecular Features of the Tumors
The isocitrate dehydrogenase 1 mutation was found in only one patient. Promoter methylation of the methyl-guanine methyl transferase gene was found in seven patients. The median of the Ki-67 labeling index was 30% (range 10-60%).
Efficacy
As of 21 July, 2020, data from 15 patients were available for analysis. Partial response and stable disease were each achieved in 5 of the 15 patients. The treatment achieved an ORR of 33.3%, a disease control rate of 66.6%, a median PFS of 2 months (95% confidence interval 1.0-2.9), and a median OS of 6.5 months (95% confidence interval 3.6-9.3). Figure 1 shows the Kaplan-Meier OS and PFS curves. The pre- and post-treatment cranial magnetic resonance imaging scans of two patients with a good response to apatinib are shown in Fig. 2.
AEs
The AEs included thrombocytopenia in eight patients, asthenia in six, hand-foot syndrome in five, elevated liver enzymes in five, hypertension in four, leukopenia in four, oral ulcers in two, and proteinuria in one. Dose adjustment was required for a total of seven patients because of thrombocytopenia ≥ grade 3 in two patients, hand-foot syndrome ≥ grade 3 in three, and leukopenia ≥ grade 3 in two.
Discussion
Anti-tumor angiogenesis therapy plays an important role in tumor treatment. Bevacizumab binds to VEGF-A and inhibits its binding to VEGFR-2, thereby inhibiting angiogenesis and tumor growth. Additional chemotherapy can further improve the anti-tumor effect of bevacizumab. This is also true in treating recurrent GBM. Bevacizumab plus irinotecan has better efficacy in treating recurrent GBM, but with more AEs, including gastrointestinal reactions and bone marrow suppression [7]. Vascular endothelial growth factor receptor tyrosine kinase inhibitors (TKIs) are novel anti-angiogenic drugs generally used in treating advanced tumors. They are multi-target TKIs that inhibit tumor angiogenesis pathways more potently than anti-angiogenic antibody drugs. Vascular endothelial growth factor receptor TKIs have demonstrated significant inhibitory effects on a variety of solid tumors, including lung cancer, gastric cancer, bowel cancer, and liver cancer. Multiple VEGFR-TKIs have been used to treat patients with recurrent GBM.
A phase III clinical study showed a response rate of 56% for cediranib in treating recurrent GBM, with a 6-month PFS rate of 26% [15]. Sunitinib and other VEGFR-TKIs were less effective in treating recurrent GBM, with a 6-month PFS rate of only 10.4%. Sorafenib achieved a 7.9-month PFS and a 17.8-month OS in treating recurrent GBM in phase I clinical trials, but serious toxic effects were observed [16,17]. Why the efficacy of sunitinib, cediranib, and sorafenib varies in patients with recurrent GBM is still unclear.
Both sunitinib and cediranib are multi-target receptor TKIs with inhibitory effects on VEGFR 1/2/3 and platelet-derived growth factor receptor pathways. Sorafenib is also a multitarget receptor TKI with inhibitory effects on VEGFR-2, c-Kit, fibroblast growth factor receptors, and the FLT-3 pathways. Recent studies have shown that VEGFR-1 is mainly responsible for the positive regulation of monocyte and macrophage migration, VEGFR-3 is mostly related to the formation of lymphatic vessels, while VEGFR-2 plays a primary role in tumor angiogenesis and therefore is the main target in treating tumor angiogenesis.
Sunitinib, cediranib, and sorafenib are not the same in terms of inhibitory activity against VEGFR-2 tyrosine kinase, with a half-maximal inhibitory concentration of 9 nM, 90 nM, and 5 nM, respectively, clearly indicating that cediranib has the strongest inhibitory effect on VEGFR-2 kinase. This partially explains why cediranib is the most effective agent in treating recurrent GBM [18,19].
Apatinib is a compound derived from the small-molecule VEGFR-TKI PTK787 (vatalanib). It is chemically known as N-[4-(cyanocyclopentyl)phenyl]-[2-[(4-pyridinylmethyl)amino](3-pyridine)]formamide methanesulfonate, with a molecular formula of C25H27N5O3S and a molecular weight of 493.58 (as the methanesulfonate). Pharmacodynamic studies have shown that apatinib inhibits the activity of VEGFR-2 tyrosine kinase by occupying the intracellular ATP-binding site of the receptor, thereby blocking signal transduction downstream of VEGF binding and inhibiting tumor angiogenesis. Additionally, apatinib can effectively inhibit VEGFR-2 at a very low concentration, with a capacity for binding VEGFR-2 that is more than ten times that of PTK787, as shown in activity assays. With regard to the inhibitory activity against VEGFR-2 tyrosine kinase, apatinib has a half-maximal inhibitory concentration of 2 nM and is stronger than cediranib. The drug was approved by the China Food and Drug Administration in October 2014 for the third-line or later-line treatment of advanced metastatic gastric cancer/adenocarcinoma of the esophagogastric junction. In that setting, apatinib treatment significantly prolonged median PFS and OS compared with placebo (PFS 2.6 months vs 1.8 months; OS 6.5 months vs 4.7 months; ORR 2.8% vs 0%) [20].
As a novel agent, apatinib was previously applied to treat patients with recurrent GBM in China. Individual case reports indicated that apatinib monotherapy was effective in treating recurrent GBM [9]. In this prospective study, a remission rate of 33.3% and a disease control rate of 66.6%, along with a PFS of 2 months and an OS of 6.5 months, were established for apatinib monotherapy in treating recurrent GBM after chemoradiotherapy. The PFS and OS data of this study were inferior to those of previous retrospective studies [14]. The incidence of AEs was slightly higher [14]. We believe the efficacy of apatinib in the treatment of recurrent GBM is worse than that of bevacizumab and temozolomide when compared with historical data [21].
Conclusions
In this prospective study, we showed that apatinib might be effective in treating recurrent GBM after chemoradiotherapy in terms of the ORR, but the responses were not durable and the clinical benefit was limited. The adverse reactions, especially thrombocytopenia and hand-foot syndrome, require attention and preventive management.
Some limitations of this study cannot be ignored, especially the small sample size, single-center design, lack of a control group, and potential biases. Further randomized controlled clinical studies are necessary to verify these findings.
The effect of family policies and public health initiatives on breastfeeding initiation among 18 high-income countries: a qualitative comparative analysis research design
Background The objective of this study is to examine the effects of macro-level factors – welfare state policies and public health initiatives – on breastfeeding initiation among eighteen high-income countries. Methods This study utilizes fuzzy-set Qualitative Comparative Analysis methods to examine the combinations of conditions leading to both high and low national breastfeeding initiation rates among eighteen high-income countries. Results The most common pathway leading to high breastfeeding initiation is the combination of conditions including a high percentage of women in parliament, a low national cesarean section rate, and either low family spending, high rates of maternity leave, or high rates of women working part-time. The most common pathway leading to low breastfeeding initiation includes the necessary condition of low national adherence to the Baby-Friendly Hospital Initiative. Conclusion This research suggests that there is a connection between broad-level welfare state policies, public health initiatives, and breastfeeding initiation. Compliance with the WHO/UNICEF initiatives depends on welfare regime policies and overall support for women in both productive and reproductive labor.
Background
Breastfeeding is the optimal source of infant nutrition and has been linked to positive health outcomes in early childhood and into adulthood [1][2][3]. In Western, high-income Organisation for Economic Cooperation and Development (OECD) countries, however, breastfeeding initiation ranges from 44% to 99% [4]. This wide range raises questions about what enables or constrains breastfeeding in different national settings. The purpose of this paper is to examine the case of both public health efforts and welfare state policies as explanations of the breastfeeding variation among eighteen high-income, OECD countries.
Sociologists and economists have only recently begun to study breastfeeding as an outcome of social and political processes in both welfare state and public health research. Galtry [5] examined the effects of labor market policy and socio-cultural factors on breastfeeding rates in three high-income, OECD countries: Sweden, the United States, and Ireland. Cattaneo et al. [6] examined the current situation of breastfeeding in 29 European Union member-states and affiliates, in advance of the European Union (EU)-funded "Promotion of Breastfeeding in Europe" project. The latter authors found that labor market policies are a key predictor of breastfeeding rates in the EU countries. Cattaneo et al. [6] also noted that "different social and cultural determinants, as well as flawed policies and unequal support among and within health-care systems, could also explain differences in breast-feeding rates. But it is definitely difficult to understand why initiation and duration of breastfeeding vary so much, and more comparative research is needed" [6 (pg 43)]. Following that recommendation from Cattaneo et al., the research presented in this paper contributes to the small but growing body of comparative research on breastfeeding, labor market policies, and public health policies by analyzing breastfeeding outcomes in 18 high-income OECD countries.
The World Health Organization (WHO) Global Strategy for Infant and Young Child Feeding examines the four WHO/United Nations Children's Fund (UNICEF) recommendations for the protection and promotion of breastfeeding worldwide [7]. These recommendations follow those of the Innocenti Declaration, a 1990 WHO/UNICEF declaration stating that all member countries should adhere to four policies: developing national breastfeeding policies and coordinators, ensuring that all hospitals follow the Ten Steps to Healthy Breastfeeding, ensuring all member-states follow the International Code of Marketing of Breast-Milk Substitutes, and ensuring all member-states enact legislation to protect the rights of working mothers [7].
Yngve and Sjostrom [8] performed a comprehensive study on the degree to which EU member-states complied with the WHO/UNICEF recommendations. They found vastly differing breastfeeding rates, along with varying degrees of compliance with the Innocenti Declaration. For example, they found that few countries have both a national breastfeeding coordinator and a national plan of action on breastfeeding, and that there is a great deal of variation in both initiation and duration of breastfeeding, even within similarly situated countries. In addition, they found that breastfeeding rates are difficult to compare country to country because of varying definitions and operationalization of breastfeeding, including what is considered "exclusive" breastfeeding [8]. Finally, they found that there is very little consistency in the reporting of breastfeeding statistics in many countries because, unlike demographic characteristics like fertility, breastfeeding rates are not regularly reported and catalogued [8].
Cattaneo et al. [6] built upon Yngve and Sjostrom's study, finding much of the same variation. In 2003, they distributed a questionnaire to key personnel in 18 countries: the 15 EU then-member states and Norway, Iceland, and Sweden. The questionnaire asked different actors (governmental agencies, public health institutions, and NGOs) to report the state of breastfeeding in their countries [6]. The survey asked about adherence to the Baby-Friendly Hospital Initiative, adherence to the International Code of Marketing of Breastmilk Substitutes, the degree to which volunteer groups are active in breastfeeding support, and the rates of both exclusive and complementary breastfeeding. Again, the survey found wide variation in adherence to public health initiatives across countries, as well as a great degree of variation in breastfeeding initiation and duration [6].
Theoretical contributions
One of the theories this paper examines with respect to breastfeeding variation is family policy within broader welfare state theory. Family policies, broadly defined, refer to any regulation of social and economic life relating to families and/or the interaction between families and other social institutions [9,10]. Examples of family policies include subsidized childcare, public spending on family benefits, and paid parental leave. Esping-Andersen [11] nested family policies within a broader welfare state classification system based on differing arrangements of markets, the state, and families. Esping-Andersen [11] identified three welfare state regimes: social-democratic, conservative, and liberal. The social-democratic welfare regime, present in most Scandinavian countries, offers a system in which vast social protections are extended to working-class and middle-class families, and the state provides a variety of family supports to all citizens [11]. Conservative welfare states, including Germany, Austria, and Italy, provide some social supports and financial benefits to mothers, but fewer universal benefits to all citizens. Thus, they offer benefits that exclude non-mothers. Also, the level of support for family work in the form of subsidized or public day care is very limited [11]. Countries in the conservative regime offer long leaves that could encourage motherhood, but then effectively keep mothers out of the labor market because of their low level of support for child care. As a result, they present reproductive labor and productive labor as an either/or choice for women. The liberal regime includes the United States, the United Kingdom, and Canada. In the liberal regime, the state provides very few social supports and financial benefits to families, instead leaving those supports and benefits up to the markets [11]. The few state benefits that are available are typically means-tested and restricted to individuals with great need. Thus, the liberal regime does not support reproductive labor but rather takes a laissez-faire approach.
Welfare state policies provide a useful lens through which to examine breastfeeding outcomes. However, they are not the only predictive factor. Public health policies, including the application of international-level initiatives, have an important role in breastfeeding outcomes [12,13]. More specifically, the ways in which individual countries have adopted policy recommendations from the World Health Organization (WHO) and the United Nations Children's Fund (UNICEF) affect the public health climate which either supports or discourages breastfeeding. These policy recommendations, however, do not operate independently from overall welfare state regimes or family policies. In fact, the degree to which welfare states acknowledge the value of care as a social good, and support that care, directly influences the degree to which externally-developed public health recommendations are implemented.
The Baby-Friendly Hospital Initiative (BFHI) is a component of the four-part Innocenti Declaration. A large body of research has shown that the BFHI has a significant, positive effect on breastfeeding initiation. For example, Kramer et al. [12] conducted the landmark Promotion of Breastfeeding Intervention Trial (PROBIT) in the Republic of Belarus. They found that in a randomized trial of BFHI interventions in maternity hospitals in Belarus, infants who received the interventions were more likely to breastfeed at 12 months than infants in non-BFHI facilities [12].
Similarly, Merten, Dratva, and Ackermann-Lebrich [13] studied breastfeeding duration among mothers who gave birth in both Baby-Friendly hospitals and non-Baby-Friendly hospitals in Switzerland in 1994 and 2003. They found that breastfeeding duration had been increasing overall, but women who gave birth at Baby-Friendly hospitals experienced greater gains in breastfeeding duration than those who did not. In Norway, many hospitals were transitioning to the WHO's recommended "Ten Steps to Healthy Breastfeeding" even before the BFHI was developed. Endresen and Helsing [14] found that Norway's hospitals and maternity wards were overwhelmingly following the Ten Steps as early as 1991. In fact, breastfeeding at 12 weeks in Norway was greater than 80% in 1991, compared to only 30% in 1968 [14].
This raises the question, then, of how an intervention like the BFHI can be combined with other policy initiatives to increase rates of breastfeeding. Lutter and Morrow [15] examined the degree to which the implementation of the WHO/UNICEF Global Strategy for Infant and Young Child Feeding affects exclusive breastfeeding rates over a 10 to 20 year period in various countries. The Global Strategy includes nine operational targets relating to worldwide breastfeeding, and includes the reaffirmation of the goals of the Innocenti Declaration [15]. The WHO developed the World Breastfeeding Trends Initiative (WBTi) as a tool to assess national practices and policies [15]. Lutter and Morrow found that countries that implement and adhere to the WHO Global Strategy have shown improvements in exclusive breastfeeding [15].
Perez-Escamilla and Moran tackle the challenge of scaling up effective breastfeeding interventions to a national level. They rely heavily on the Complex Adaptive Systems (CAS) approach, which recognizes the interconnectedness of the variety of agents and resources needed to successfully implement health programs at a national level [16,17]. As breastfeeding policy is well-suited to a CAS lens, it is valuable to understand the intricacies and connections between the WHO/UNICEF Global Strategy and the broader welfare state structures those policies must operate in. As such, the analysis in this paper combines an understanding of both the implementation of pieces of the WHO/UNICEF Global Strategy and the welfare state policies and characteristics of high-income countries. The specific WHO/UNICEF policies this analysis examines are participation in the BFHI and maternity protections for women, measured as weeks of paid maternity leave guaranteed to women. These two indicators are used because of the availability and comparability of the measures over the country selection.
To examine the effects of both public health initiatives and welfare state policies on breastfeeding initiation among eighteen high-income countries, fuzzy-set Qualitative Comparative Analysis was used. The goal of the current research was to examine the combinations of conditions leading to both high and low national breastfeeding initiation rates. Table 1 displays the countries used in the analysis as well as the breastfeeding initiation rates, on or about 2005. The country selection is based on prior research and the availability of comparable breastfeeding data. Note that data were not available for each country for the same year; data availability on breastfeeding, in fact, is a common problem throughout the comparative breastfeeding literature. In keeping with previous research, this paper includes breastfeeding initiation information on or around 2005. Table 1 displays the exact year and source. In keeping with similar studies of breastfeeding, child health outcomes, and maternity leave [18,19], only high-income, OECD countries were included in order to minimize variation in other influences on breastfeeding rates, including the cost of commercial formula, availability of clean water, and access to commercial formula. The country selection was also driven by research by Cattaneo et al. [6,20], who were among the first to perform a comprehensive study of breastfeeding outcomes and adherence to WHO initiatives to increase breastfeeding rates.
Sample selection
The eighteen countries included in the analysis represent a wide range of geography, economy, and culture, but are all members of the Organization for Economic Co-operation and Development (OECD), a global organization made up of 34 member-states dedicated to "promote policies that will improve the economic and social well-being of people around the world" [21].
Outcome variable
Breastfeeding initiation refers to the percentage of women who ever breastfed, even just once. Most countries collect data on breastfeeding initiation, even if they use different data collection procedures. The OECD compiles these data in their cross-national Family Database [4]. Initiation ranges from 99% to 47.7% among the countries in this analysis.
Explanatory variables
Percentage of Baby-Friendly hospitals is operationalized as the percentage of hospitals with maternity wards that are certified as meeting the WHO/UNICEF criteria for baby-friendly, meaning they follow the Ten Steps to Healthy Breastfeeding.
Weeks full-time equivalent paid maternity leave addresses the WHO/UNICEF recommendations for maternity protections for women [7]. Maternity leave policies vary in both scope and breadth across the countries in the analysis. Broadly speaking, many countries have "parental leave" policies that can be either maternal leave, specifically to be taken by the mother, paternal leave, specifically to be taken by the father, or parental leave, which can be taken by either the mother or the father [22]. Maternity leave generosity is measured in two ways: the amount of time mothers can take from work without fear of losing their job, and the amount of compensation that women receive as a portion of their salary while they are taking leave. Most countries offer both: job-protected leave that is paid at a portion of the salary. One notable exception is the United States, which provides job-protected leave only [23]. Maternity leave is an important predictor of breastfeeding outcomes cross-nationally because mothers in countries with generous maternity leave policies may have more opportunity to spend time at home with their infants, thus reducing the opportunity costs and any financial penalty associated with breastfeeding [5,18,24,25]. This analysis follows Ray [22] and Ray, Gornick, and Schmitt [23] and operationalizes maternity leave as the number of weeks of full-time equivalent paid leave. The number of weeks of full-time equivalent (FTE) leave is calculated by multiplying the wage replacement rate by the number of weeks of job-protected leave. For example, if a country provides women with 50 weeks of leave, paid at 50% of salary, the number of weeks of FTE maternity leave would be 25.
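The FTE calculation described above is simply the product of the wage-replacement rate and the number of job-protected weeks. As a small illustrative sketch (Python; the two-tier policy in the second example is hypothetical, not a policy from the dataset), it can be written to handle policies with mixed replacement rates:

```python
def fte_leave_weeks(segments):
    """Weeks of full-time-equivalent paid leave.

    segments: list of (weeks, replacement_rate) pairs, e.g. a policy that
    pays 100% of salary for the first segment and 50% for the next.
    """
    return sum(weeks * rate for weeks, rate in segments)

# Example from the text: 50 weeks paid at 50% of salary -> 25 FTE weeks.
print(fte_leave_weeks([(50, 0.5)]))            # 25.0
# Hypothetical two-tier policy: 6 weeks at 100% plus 44 weeks at 50%.
print(fte_leave_weeks([(6, 1.0), (44, 0.5)]))  # 28.0
```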
Female part-time employment rate is calculated as the percentage of employed women between the ages of 15 and 64 who are working part-time. This analysis uses the International Labour Organization definition of less than 30 h per week [26]. Examining the female part-time employment rate helps to operationalize a key part of the theory: that women in the one-and-a-half breadwinner welfare regime are pinched at both ends.
Cesarean section rate is operationalized as the percentage of women delivering via Cesarean section, whether planned or unplanned. The Cesarean section rate is rising among high-income countries, and Cesarean birth runs counter to several of the Ten Steps to Healthy Breastfeeding [7]. For example, women who give birth via Cesarean section often will not have initial skin-to-skin contact with their infant until after the recommended one hour time period [27]. The rise of Cesarean sections and the associated consequences of late breastfeeding initiation require the addition of Cesarean section rate as a predictor variable.

Public spending on healthcare is operationalized as the "sum of public and private health expenditure. It covers the provision of health services (preventive and curative), family planning activities, nutrition activities, and emergency aid designated for health" [28].
Public spending on family benefits is operationalized as a percentage of the country's overall GDP. As part of a welfare state model, many countries provide families with various subsidized services or cash transfers [4]. Subsidized services include low-cost childcare, provisions for needy families, and early childhood education [4]. Specifically in this analysis, "family benefits" are divided into three categories: (1) child-related cash transfers to families with children, including cash allowances for having children, public income payments during maternity and paternity leave, and income support for single parent families [4]; (2) public spending on services for families with children, including financing and subsidizing childcare and early childhood education, public spending for residential facilities, and spending on family services such as home care for needy families [4]; and (3) tax breaks for families, which include tax exemptions, child tax allowances deducted from gross income, and child tax credits [4].

Percentage of women in parliament is operationalized as simply the percentage of women in either the parliament or the upper house of a bicameral system during the given year. Kittilson [29] and others [30,31] find that women in governmental positions tend to drive votes on issues such as maternity leave, welfare reform, and other women-specific legislation [32].
Finally, this analysis controls for fertility rate, operationalized as "the number of children that would be born to a woman if she were to live to the end of her childbearing years and bear children in accordance with current age-specific fertility rates" [33]. Previous research examining welfare state policies and child health outcomes have controlled for fertility rate [18,19]. Ruhm [19] has suggested that controlling for fertility rate allows the researcher to look at economies of scale; that is, cost advantages that are provided when more of the product is produced.
Analysis: Fuzzy-set qualitative comparative analysis (fsQCA)

Qualitative Comparative Analysis (QCA) is a method developed by Ragin that provides an analytic tool to bridge the gap between case-oriented and variable-oriented research [34]. Case-oriented research involves in-depth, small-n studies where many aspects of an individual case are examined and individual cases are compared and contrasted. Variable-oriented research falls on the other end of the spectrum, and is most common in survey-type research, where researchers examine one or two dependent variables and attempt to explain as much variation as possible using large datasets [34]. QCA seeks to "examine similarities and differences across many cases while preserving the integrity of cases as complex configurations" [37 (pg 38)]. QCA uses the logic of set relations to address causal complexity and causal configurations in comparative research. Causal complexity is often present in social research; this means that outcomes do not arise from a single source, or cause, but rather from a combination of conditions that operate with each other to produce the outcome. In large, variable-oriented analyses, this type of complexity can be approximated by using interaction terms, but interaction terms fail to capture any nuance. That is, interaction terms assume the interaction is multiplicative, when in reality two causes may simply need to be present together, in any form, to produce the outcome [34,35].
Necessity and sufficiency are two terms that are important in a QCA analysis. A necessary condition is one in which the causal condition is present in all instances of the outcome. A sufficient condition is one that, when present, always leads to the outcome; that is, it is sufficient on its own to produce the outcome, but doesn't necessarily have to be present in all instances of the outcome [34].
Once necessity and sufficiency are determined, the QCA researcher can identify the results, keeping in mind simplifying assumptions. Simplifying assumptions take advantage of the fact that there is limited diversity in causal configurations, meaning that some of the logically possible configurations actually don't exist in the data [34]. The pathways that remain after the simplifying assumptions are taken into account then form the basis of the explanations of causal complexity. These pathways are best explained in set-theoretic logic; for example, high membership in a set of one causal condition combined with low membership in the set of a second causal condition may lead to an outcome.
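In fuzzy-set terms, the "combination" of conditions in a causal recipe is taken as the minimum of the membership scores, logical OR across recipes as the maximum, and negation (the tilde) as one minus the score. A minimal sketch of these standard fsQCA set operations is shown below (Python; the membership scores in the example are invented for illustration and are not values from this study).

```python
def f_and(*scores):
    """Fuzzy intersection (logical AND): minimum of the membership scores."""
    return min(scores)

def f_or(*scores):
    """Fuzzy union (logical OR): maximum of the membership scores."""
    return max(scores)

def f_not(score):
    """Fuzzy negation (the ~ operator): 1 minus the membership score."""
    return 1.0 - score

# Hypothetical country: women-in-parliament membership 0.9, membership 0.2 in
# "high cesarean rate" (so 0.8 in its negation), maternity-leave membership 0.7.
women_parl, cesarean, leave = 0.9, 0.2, 0.7
recipe = f_and(women_parl, f_not(cesarean), leave)  # women_parl * ~cesarean * leave
print(recipe)  # 0.7 -> this case's membership in the causal recipe
```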
Calibration of fuzzy sets
In a fuzzy-set QCA analysis, the membership of cases in each fuzzy set is calibrated by the researcher. Fuzzy sets represent a fine-grained degree of membership in the set, and fuzzy set scores for each case range from 0 to 1. Calibration is based on theoretical considerations, prior research, and the conceptualization of membership in the set. A fuzzy set score of 0.05 is full non-membership in the set, a score of 0.5 is the crossover point, and a score of 0.95 is full membership in the set. It is important to note that fuzzy sets are not simply a way of turning a variable into a continuous variable; they are instead used to determine degree of membership in a set, and the construction of fuzzy sets is based on theory and prior research [34]. Table 2 shows the criteria used for the fsQCA analysis in the current study.
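Calibration in fsQCA is commonly carried out with Ragin's direct method, which maps a raw value onto a 0-1 membership score via a log-odds transformation anchored at the three qualitative thresholds described above. A sketch under that assumption follows (Python; the anchor values in the example are invented for illustration and are not the criteria listed in Table 2).

```python
import math

def calibrate(x, non_member, crossover, full_member):
    """Direct-method fuzzy-set calibration (Ragin).

    Maps raw value x to a membership score in [0, 1] using three anchors:
    ~0.05 at `non_member`, 0.5 at `crossover`, ~0.95 at `full_member`.
    """
    if x >= crossover:
        # scale so that full_member maps to a log-odds of about +3 (membership ~0.95)
        log_odds = 3.0 * (x - crossover) / (full_member - crossover)
    else:
        # scale so that non_member maps to a log-odds of about -3 (membership ~0.05)
        log_odds = -3.0 * (crossover - x) / (crossover - non_member)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Illustrative anchors for breastfeeding initiation (%): 50 = full non-membership,
# 75 = crossover, 95 = full membership (hypothetical, not the study's thresholds).
for rate in (47.7, 75.0, 90.0, 99.0):
    print(rate, round(calibrate(rate, 50, 75, 95), 2))
```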
Recipes
In evaluating QCA solutions, both coverage and consistency are important. Coverage, scored from 0 to 1, refers to how much of the outcome is explained by each solution term and by the entire solution. The total solution coverage is the proportion of the membership in the outcome variable that can be explained by the complete solution (all individual causal "recipes") [36]. Raw coverage is the proportion of the membership in the outcome that can be explained by each causal recipe in the solution. Raw coverage can include cases that are covered by more than one solution. Unique coverage, on the other hand, is the proportion of cases in the sample that are covered by only that one solution. Of note is the terminology "causal conditions." In QCA, "causal condition" is the preferred term over "independent variable" because of the principles of logic and Boolean algebra used in the analysis [37]. Schneider and Wagemann note that "terminological differences can sometimes lead to stylistic problems and substantive confusion," thus necessitating a note in the manuscript ([37], pg 20).
Consistency in the QCA solutions refers to the degree to which membership in the solution is a subset of membership in the outcome [36]. It also is measured from 0 to 1. A case is considered consistent with each solution term if membership in the solution term is less than or equal to membership in the outcome [36]. For the whole solution consistency, we measure the degree to which membership in the set of solution terms is a subset of membership in the outcome.
The key is to balance consistency and coverage; a solution consistency score of 0.8 is considered meaningful, and a consistency score of 0.9 is highly significant [38]. The QCA algorithm requires the user to specify the criteria used to exclude and code configurations so that logically irrelevant conjunctions are eliminated [39]. In this analysis, 0.8 was used as the threshold for consistent subsets of the outcome.
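For a sufficiency relation, consistency and coverage have standard fuzzy-set formulas: consistency = Σ min(X_i, Y_i) / Σ X_i and coverage = Σ min(X_i, Y_i) / Σ Y_i, where X is membership in the solution term and Y is membership in the outcome. A small illustrative sketch (Python; the membership scores are made up and are not from this study's calibration):

```python
def consistency(x, y):
    """Sufficiency consistency: sum(min(x_i, y_i)) / sum(x_i)."""
    overlap = sum(min(xi, yi) for xi, yi in zip(x, y))
    return overlap / sum(x)

def coverage(x, y):
    """Coverage of the outcome by the solution term: sum(min(x_i, y_i)) / sum(y_i)."""
    overlap = sum(min(xi, yi) for xi, yi in zip(x, y))
    return overlap / sum(y)

# Illustrative memberships: x = solution-term scores, y = outcome scores.
x = [0.9, 0.7, 0.6, 0.2, 0.1]
y = [0.95, 0.8, 0.7, 0.4, 0.3]
print(round(consistency(x, y), 2))  # high value -> x is (close to) a subset of y
print(round(coverage(x, y), 2))     # how much of y is accounted for by x
```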
Results
Note that because there are no necessary conditions for high breastfeeding initiation at the national level, there are several very different pathways that lead to membership in the set of countries with high breastfeeding initiation.
Outcome variable: Membership in the set of countries with high breastfeeding initiation

As Table 3 shows, there are six pathways leading to membership in the set of countries with high breastfeeding initiation [44] (in Table 3, the tilde (~) symbol before a variable indicates that there is low adherence to that particular variable, and the asterisk symbol (*) means "the combination of" the associated conditions). The solutions can be displayed graphically in three models (Figs. 1, 2 and 3). The first three solutions (Fig. 1), covering Denmark, Finland, the Netherlands, Norway, and Sweden, all involve countries with both a high level of women in parliament and a low cesarean section rate. The set of Scandinavian countries (Denmark, Finland, Norway, and Sweden), all with very high breastfeeding initiation, also share high levels of maternity leave. The fourth solution for high breastfeeding initiation includes Canada, Italy, Portugal, and Spain. These countries share a very different set of characteristics; they have low family spending but a low female part-time employment rate. The fifth solution again clusters a different set of countries: Australia, Austria, and New Zealand all have a high female part-time employment rate and high family spending, but low levels of maternity leave. Finally, Japan has high breastfeeding initiation but fits into a solution by itself, with low family spending, high maternity leave, and a low Cesarean section rate (see Fig. 4 and Table 4 for fuzzy set scores).
Outcome variable: Membership in the set of countries with low breastfeeding initiation

As evidenced in Table 5, there are two pathways in the dataset that lead to low breastfeeding initiation. Figure 5 displays this solution.
France and Ireland fit into the two solutions for low breastfeeding initiation. In both France and Ireland, there is a low level of women in Parliament, high family spending, high levels of maternity leave, and low adherence to the Baby-Friendly Hospital Initiative. France has a low female part-time employment rate, and Ireland has a high cesarean section rate.
Discussion
The fsQCA results shed light on the combinations of conditions that lead to both high breastfeeding initiation and low breastfeeding initiation. When looking at high breastfeeding initiation rates, seven countries fit the pathway that includes a high percentage of women in parliament. In the fsQCA analysis, countries in the subset with high female representation in parliament are also in the set of countries with high breastfeeding initiation. The specific variable of women in parliament helps to cluster some of the countries. For example, all of the Scandinavian countries (Denmark, Finland, Norway and Sweden), as well as Austria, the Netherlands, and Spain, have both high female representation in parliament and high breastfeeding initiation rates. In fact, of all the countries with a high percentage of women in parliament, only Belgium does not also have high breastfeeding initiation. This suggests that having high representation of women in the governing body of a country may facilitate female-centered outcomes, notably in this case breastfeeding initiation. This follows existing theory on women in parliament, because countries that are more female-friendly in other ways may have both more female representation in governmental roles and be more accommodating of breastfeeding. In looking at the research question underlying the current study (how do state supports and public health initiatives work together to produce a climate conducive, or not, to breastfeeding?), it makes theoretical sense that having women in parliament will be a significant pathway to high breastfeeding initiation, because carework is more likely to be valued by a government with a higher percentage of women. This raises the possibility that a third factor could be driving the interaction, which is a consideration for future research.
In the set of countries with high breastfeeding initiation, there are several very different pathways. One particularly noteworthy finding is that four of the six pathways include low cesarean section rate. This supports research by Rowe-Murray and Fisher [27] who found that cesarean section birth increases the time between birth and skin-to-skin contact between mother and infant, which is detrimental to early breastfeeding success.
Of particular note in looking at the set of countries with high breastfeeding initiation is that the BFHI was not a significant predictor. However, when looking at the negative outcome (the set of countries with low breastfeeding initiation), the BFHI was a significant factor in the solution. For the small number of countries in this study with low breastfeeding initiation rates, having low participation in the BFHI is a necessary condition for having low breastfeeding initiation, meaning that while high participation may not be the driving force for high breastfeeding outcomes, low participation is a negating factor. This suggests that in countries that want to increase breastfeeding initiation, the BFHI may give countries the needed extra "push" to boost breastfeeding initiation to a point where other structural factors can play a larger role. Participation in the BFHI may assume that countries value carework and recognize breastfeeding as a public good.
The fsQCA analysis also allows for an examination of the subset of countries with low breastfeeding initiation because of the asymmetry of set theory. In looking at the set of countries with low breastfeeding initiation, both a policy-level component and a public health-level component were missing. In looking at France and Ireland, they are both part of the conservative welfare regime, which supports the model of the male breadwinner and female caregiver. As such, both countries have high levels of support for female reproductive labor, but do not have a high level of female representation in government, which, it could be argued, demonstrates an overall lack of female-friendliness. This suggests that countries whose policies do not support carework may also not have the ability to develop and implement effective women-centered public health policies. That is, public health policies may require strong support for women and carework in welfare state policies before they can operate at the institutional level. The results of the fsQCA analysis also build upon the work of Lutter and Morrow in their analysis of WBTi assessments. This analysis combines some of the WBTi indicators (specifically the BFHI and maternity protection) with a broader context of policy and the labor market to further shape the understanding of how policies may be implemented and how that implementation affects breastfeeding initiation [15]. It is difficult to attribute the outcome to policy per se; it is more likely to be the successful implementation of that policy.
The use of fsQCA as a method is not without its limitations. Schneider and Wagemann recommend standards for best practice in using QCA, along with the conclusions that can reasonably be drawn from the method [37]. Specifically, they caution that individual pieces of the configurational solution should not be overinterpreted. They also caution that the configurational solution alone is not sufficient for determining causality [37]. In fact, QCA is useful for testing existing theories and exploring new theories [40]. The analysis in this manuscript should be seen as a beginning: a way to connect the public health and policy explanations for breastfeeding outcomes. Ragin [36] cautions that QCA techniques are best for exploring evidence and do not hold the same inferential capabilities as other methods. In the case of France and Ireland, for example, there is only one pathway. This is not causative, per se, but invites further study into the configurations. Indeed, adding additional cases may slightly change some of the pathways, but will add more richness to the analysis.
Conclusions
This research suggests that there is a connection between broad-level welfare state policies, public health initiatives, and breastfeeding initiation. Compliance with the WHO/UNICEF initiatives depends on welfare regime policies and overall support for women in both productive and reproductive labor. For example, Norway and Sweden have high participation in the BFHI and have implemented many parts of the Innocenti Declaration. The Innocenti Declaration, one of the most comprehensive WHO/UNICEF policies targeting breastfeeding, recommends supports for women to engage in reproductive labor, specifically breastfeeding, and includes provisions for women to combine breastfeeding with productive labor by recommending a minimum amount of paid maternity leave. Sweden and Norway also have policies that recognize women as both contributors to the labor market and as valued providers of carework. In these countries, women gain their rights and positions in society through both productive and reproductive labor, and carework is supported as a valued public good. Second, the results of the current study suggest that the absence of the BFHI is significantly related to lower breastfeeding initiation. The results of the fsQCA analysis demonstrate that being part of the set of countries with low participation in the BFHI is a necessary condition for being in the set of countries with low breastfeeding initiation.
This study provides a valuable framework from which to understand the relationship between national-level family policies, public health initiatives, and breastfeeding initiation among high-income, Western countries. Future studies may examine breastfeeding duration, changes in breastfeeding rates over time, and the influence of culture and religion on country-level outcomes.
Human leukocyte antigen I is significantly downregulated in patients with myxoid liposarcomas
The characteristics of the tumor immune microenvironment remain unclear in liposarcomas, and here we aimed to determine the prognostic impact of the tumor immune microenvironment across separate liposarcoma subtypes. A total of 70 liposarcoma patients with three subtypes: myxoid liposarcoma (n = 45), dedifferentiated liposarcoma (n = 17), and pleomorphic liposarcoma (n = 8) were enrolled. The presence of tumor infiltrating lymphocytes (CD4+, CD8+, FOXP3+ lymphocytes) and CD163+ macrophages and expression of HLA class I and PD-L1 were assessed by immunohistochemistry in the diagnostic samples; overall survival and progression-free survival were estimated from outcome data. For infiltrating lymphocytes and macrophages, dedifferentiated liposarcoma and pleomorphic liposarcoma patients had a significantly higher number than myxoid liposarcoma patients. While myxoid liposarcoma patients with a high number of macrophages were associated with worse overall and progression-free survival, dedifferentiated liposarcoma patients with high macrophage numbers showed a trend toward favorable prognosis. Expression of HLA class I was negative in 35 of 45 (77.8%) myxoid liposarcoma tumors, whereas all dedifferentiated liposarcoma and pleomorphic liposarcoma tumors expressed HLA class I. The subset of myxoid liposarcoma patients with high HLA class I expression had significantly poor overall and progression-free survival, while dedifferentiated liposarcoma patients with high HLA class I expression tended to have favorable outcomes. Only four of 17 (23.5%) dedifferentiated liposarcomas, two of eight (25%) pleomorphic liposarcomas, and no myxoid liposarcoma tumors expressed PD-L1. Our results demonstrate the unique immune microenvironment of myxoid liposarcomas compared to other subtypes of liposarcomas, suggesting that the approach for immunotherapy in liposarcomas should be based on subtype. Supplementary Information The online version contains supplementary material available at 10.1007/s00262-021-02928-1.
Introduction
Liposarcomas (LPS) are malignant tumors of adipocytic differentiation and the most common soft tissue sarcoma subtype, comprising approximately 15%-20% of soft tissue sarcomas in adults [1,2]. The 2020 World Health Organization classification [3] lists five histological subtypes of LPS: the intermediate atypical lipomatous tumor/well-differentiated liposarcoma (WDLPS), the malignant dedifferentiated liposarcoma (DDLPS), myxoid liposarcoma (MLPS), pleomorphic liposarcoma (PLPS), and myxoid pleomorphic liposarcoma. Each subtype has its own distinct clinical features. WDLPS and DDLPS represent the most common type of LPS, accounting for approximately 40-45% of LPS [3]. Both WDLPS and DDLPS usually exhibit a supernumerary ring and/or a giant rod chromosome with amplification of 12q13-15, which contains multiple genes that have been implicated in oncogenesis, such as MDM2, CDK4, HMG2A, and YEATS4 [2]. While the overall mutational burden of WDLPS is low, it is believed that the accumulation of additional genetic mutations leads to the development of DDLPS [1,2].
MLPS is the second most common type of liposarcoma, comprising 20-30% of liposarcomas [3]. Molecularly, it is characterized by a chromosomal translocation t(12;16)(q13;p11) that results in the FUS-DDIT3 (or CHOP) fusion protein in over 90% of patients, with a small number harboring an EWSR1-DDIT3 translocation [2,3]. Distant metastasis can commonly arise in various sites such as bone, retroperitoneum, and serosal surfaces, even in the absence of lung metastasis [2,3].
PLPS is the rarest variant of LPS, accounting for 5% of all LPS [3]. PLPS is also the most aggressive LPS subtype with a high rate of recurrence and metastasis [2]. However, current understanding of the molecular pathology of PLPS is limited by the rarity of this disease [1]. PLPS tends to show a complex karyotype including multiple chromosomal losses and gains, indicating a pathogenesis driven by complex and variable genomic aberrations [2].
Currently, wide-margin surgical resection remains the core curative option for LPS [2,3], and perioperative radiation is often offered to reduce local recurrence [4,5]. However, distant metastasis is not uncommon, and prognosis is exceptionally poor for these patients, with the use of chemotherapy and radiotherapy limited to advanced or recurrent cases [2]. The limitation in current treatment for aggressive LPS emphasizes the need for effective new systemic therapeutic approaches, such as immunotherapies.
Interaction between programmed cell death protein 1 (PD-1) and programmed death ligand 1 (PD-L1) plays an important role in tumor immune evasion through T cell inactivation. Previous research has demonstrated that high expression of PD-L1 correlates with worse prognosis in several malignancies [6,7]. While there have been reports indicating PD-L1 expression as a poor prognostic indicator in soft tissue sarcomas, these studies included only a small number of LPS patients, with all the subtypes lumped together. With the more current understanding of the molecular heterogeneity, further investigation of PD-L1 expression and the immune landscape in each subtype of LPS is warranted [8,9].
HLA class I proteins are expressed on virtually all nucleated cells and have several important functions in adaptive immunity [10]. HLA class I proteins can present foreign antigens to cytotoxic T cells either on antigen-presenting cells such as dendritic cells or on target cells, a process that is highly regulated. Furthermore, HLA class I proteins function as one of the most important inhibitory signals for natural killer (NK) cells, allowing NK cells to recognize non-self cells by their lack of HLA class I proteins [10]. NK cells are a critical effector of antitumor innate immunity in cancer immune surveillance, and adoptive transfer of NK cells is considered an attractive immunotherapeutic option in patients with hematological malignancies and solid tumors [10,11].
The characteristics of the tumor immune microenvironment in each LPS subtype have not been assessed in a systematic fashion with survival outcomes available. The aim of the current study is to assess the tumor immune microenvironment according to the distinct subtypes of LPS, ultimately to aid in the design of subtype-specific immunotherapeutic approaches for patients with LPS.
Patients and samples
Primary LPS patients with dedifferentiated liposarcoma (DDLPS), myxoid liposarcoma (MLPS), and pleomorphic liposarcoma (PLPS), who were diagnosed and treated at the University of Niigata between 1991 and 2018, were enrolled in this retrospective study. Patients with well-differentiated liposarcomas were excluded. Patients who did not have samples available prior to systemic therapy were excluded. A total of 70 patients were identified, and diagnostic samples (or surgical samples if upfront surgery was performed) were used for immunohistochemical examination. All samples were obtained by an open biopsy, with the exception of one sample obtained by a core needle biopsy. Among these patients, 16 patients also had metastatic tumor samples available. In all cases, hematoxylin and eosin staining slides were reviewed to confirm that the blocks included adequate viable tumor cells. Molecular diagnostic testing results, such as FUS/EWS-DDIT3 fusion gene or amplification of MDM2, were collected when available. Clinical data were also extracted from medical records for statistical analysis.
This study was approved by the Institutional Review Board of Niigata University (No. 2016-0024) and was conducted in accordance with the Declaration of Helsinki. All patients gave written informed consent prior to participation in this research.
Immunohistochemistry
Immunohistochemical staining for tumor infiltrating lymphocytes (TILs; CD4, CD8, FOXP3), CD163+ macrophages, HLA class I, and PD-L1 was carried out as previously described [12]. Slides were stained with the primary antibodies summarized in Supp. Table 1. Next, the slides were treated with Histofine Simple Stain MAX PO MULTI (Nichirei Biosciences, Tokyo, Japan), and the peroxidase activity was detected with Simple Stain DAB (Nichirei Biosciences). Finally, slides were counterstained with hematoxylin (Vector Laboratories Inc., Burlingame, CA). Appropriate positive and negative controls were prepared for CD4, CD8, FOXP3, CD163, and PD-L1. Intact expression of HLA class I was confirmed by staining of endothelial cells.
Evaluation of immunohistochemistry
To enumerate tumor infiltrating lymphocytes and macrophages, areas with the most abundant lymphocytes or macrophages were selected in each section at low power magnification. The sections were then photographed with an Olympus DP73 digital camera (Olympus, Tokyo, Japan) from a maximum of five high-power fields (×200), and cells were counted manually. The count was conducted twice by an experienced pathologist who was blinded to the clinical information of the patients, and the average number was used as the final value of lymphocytes and macrophages for each patient. The median value was then used to divide the patients into high and low infiltration groups. For HLA class I, we graded the expression status according to previous reports [12]: high (number of positive cells ≥ 50%), low (5% ≤ number of positive cells < 50%), and negative (number of positive cells < 5%). PD-L1 positivity was defined as more than 1% positive cells, as previously reported [12].
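The counting and grading rules above can be summarized as a simple decision procedure. The following Python sketch is only a hypothetical illustration of those rules (a median split for cell counts, the 5%/50% cut-offs for HLA class I, and the 1% threshold for PD-L1); the function names and example values are ours, not part of the study.

```python
import numpy as np

def infiltration_group(counts_per_patient):
    """Median split of per-patient cell counts (average of two manual counts
    over up to five high-power fields) into 'high'/'low' groups.
    How counts exactly equal to the median are assigned is an assumption here."""
    counts = np.asarray(counts_per_patient, dtype=float)
    median = np.median(counts)
    return ["high" if c > median else "low" for c in counts]

def grade_hla_class_i(percent_positive):
    """HLA class I grading: high >= 50%, low 5-50%, negative < 5%."""
    if percent_positive >= 50:
        return "high"
    if percent_positive >= 5:
        return "low"
    return "negative"

def pd_l1_status(percent_positive):
    """PD-L1 is called positive when more than 1% of cells stain."""
    return "positive" if percent_positive > 1 else "negative"

# Hypothetical example values (not study data)
cd163_counts = [12.5, 30.0, 88.5, 140.0, 7.0]
print(infiltration_group(cd163_counts))   # ['low', 'low', 'high', 'high', 'low']
print(grade_hla_class_i(42.0))            # 'low'
print(pd_l1_status(0.5))                  # 'negative'
```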
Statistical analysis
Statistical analyses were conducted with GraphPad Prism v8.0 software (La Jolla, California, USA). The ANOVA test was used to evaluate statistical significance among more than two groups. The Kaplan-Meier method was used to estimate overall survival (OS) and progression-free survival (PFS) probabilities in patients with MLPS and DDLPS. OS and PFS were measured from the date of the initial biopsy. The terminal point of OS was the time of death or the time the patient was last seen. The terminal point of PFS was the time of local recurrence, distant metastasis, disease progression, or last follow-up. Survival differences were analyzed by the log-rank test. A p value less than 0.05 was considered statistically significant.
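As a concrete illustration of the analysis workflow described above, the hedged Python sketch below (using the lifelines and SciPy libraries with made-up toy data) shows how Kaplan-Meier estimates, a log-rank comparison between high and low infiltration groups, and a one-way ANOVA across subtypes could be computed. It is not the authors' analysis code, which used GraphPad Prism.

```python
import numpy as np
from scipy import stats
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Toy data: follow-up in months and event indicator (1 = death/progression)
time_high = np.array([12, 20, 34, 40, 55])
event_high = np.array([1, 1, 1, 0, 0])
time_low = np.array([30, 48, 60, 72, 90])
event_low = np.array([1, 0, 0, 0, 0])

# Kaplan-Meier estimate for one group
kmf = KaplanMeierFitter()
kmf.fit(time_high, event_observed=event_high, label="high infiltration")
print(kmf.median_survival_time_)

# Log-rank test between the two groups (p < 0.05 considered significant)
result = logrank_test(time_high, time_low,
                      event_observed_A=event_high, event_observed_B=event_low)
print(result.p_value)

# One-way ANOVA comparing, e.g., TIL counts across the three subtypes
mlps_counts = [2, 5, 8, 3, 6]
ddlps_counts = [40, 55, 62, 30]
plps_counts = [35, 48, 52]
f_stat, p_val = stats.f_oneway(mlps_counts, ddlps_counts, plps_counts)
print(f_stat, p_val)
```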
Clinical characteristics
Patient clinical characteristics are summarized in Table 1. There were 43 male patients and 27 female patients, with a mean age of 57.7 years (range 18-86). Lung metastasis was observed in 3 of 70 patients at diagnosis. Six tumors were located in the upper extremities, 59 in the lower extremities, and five in the trunk. The disease stage was classified according to the American Joint Committee on Cancer 8th edition staging system [13]: 3 patients were stage II, 34 were stage IIIA, 30 were stage IIIB, and 3 were stage IV.
Infiltration of lymphocytes and macrophages
TILs and macrophages were enumerated by averaging two separate counts from five high power fields per slide (Fig. 1a-k). TILs were defined by CD4 or CD8 positivity, and FOXP3 was used to further define CD4+ regulatory T cells. Macrophages were defined by CD163 positivity. The median numbers of TILs and CD163+ macrophages enumerated in each histological subtype were summarized (Table 2). Interestingly, samples from DDLPS patients and PLPS patients had significantly more TILs and macrophages than samples from MLPS patients (Fig. 2a-
Expression of HLA class I and PD-L1
Here, MLPS patients were found to have significantly lower expression of HLA class I compared to DDLPS or PLPS patients (Fig. 3a, b). None of the DDLPS or PLPS patients had negative HLA class I expression. Next, we analyzed the relationship between HLA class I expression and infiltration of CD8+ lymphocytes in patients with MLPS and DDLPS. In both subtypes, patients with higher expression of HLA class I demonstrated a trend toward higher infiltration of CD8+ lymphocytes, but this did not achieve statistical significance (Fig. 3c, d).
For PD-L1, none of the MLPS patient samples expressed PD-L1, whereas 23.5% (4 of 17 cases) of DDLPS and 25% (2 of 8 cases) of PLPS were positive for PD-L1 (Fig. 4a, b). For the 16 MLPS patients for whom metastatic samples were available, HLA class I and PD-L1 expression were evaluated, and all samples were negative for both HLA class I and PD-L1 (Supp. Fig. 1). In MLPS patients, a high number of CD163+ macrophages was significantly associated with unfavorable PFS (Fig. 5d) and OS (Supp. Fig. 2d). In patients with DDLPS, those with a higher number of infiltrating CD163+ macrophages demonstrated a tendency toward favorable OS (Supp. Fig. 2i), although this did not reach statistical significance. CD4+ or CD8+ TIL or FOXP3+ Treg numbers did not demonstrate discernible survival tendencies (Fig. 5a-c, f-h, Supp. Fig. 2a-c, f-h).
For PD-L1 expression, there was no prognostic impact on OS or PFS in DDLPS patients (Fig. 5k, Supp. Fig. 2k). High expression of HLA class I was associated with unfavorable prognosis in patients with MLPS (Fig. 5e, Supp. Fig. 2e), and, while not statistically significant, high expression of HLA class I tended to predict favorable PFS in DDLPS patients (Fig. 5j).
Discussion
While there have been some reports investigating the tumor immune microenvironment of soft tissue sarcomas [8,14,15], they have had limitations, such as including a mixture of treated and untreated samples [14] or lumping all the subtypes of LPS together, despite the known differences in biology and clinical behavior among the subtypes of LPS [8,15]. This study is the first large study to systematically characterize the tumor immune microenvironment of LPS according to its distinct subtypes. The mechanisms governing immune cell infiltration of tumors are complex, and not much is known. There have been reports that non-translocation-associated sarcomas have higher numbers of TILs than translocation-associated sarcomas, with DDLPS having the highest number of TILs among all histological types [14], consistent with our results. Previous pan-cancer analyses suggest that TIL burden is negatively correlated with copy number alterations [16]. DDLPS and PLPS are considered copy-number-driven sarcomas with low somatic mutation rates [1], highlighting the complex nature of the mechanisms that drive infiltration of TILs. Furthermore, tumor-associated macrophages (TAMs) are also an important component of the tumor immune microenvironment, and recent reports have demonstrated that the number of TAMs is significantly higher than that of TILs in many sarcomas, indicating a uniquely important role of macrophages in the tumor immune microenvironment of sarcomas [17]. It has been suggested that TAMs can be classified into M1-like antitumoral macrophages and M2-like pro-tumoral macrophages [18], and M2-like macrophages are considered to have an important role in tumor progression [18,19]. CD163 has been used as a useful marker of M2-like macrophages, and higher infiltration of CD163+ macrophages is generally correlated with poor prognosis in several malignancies [19,20]. For sarcomas, TAMs have been associated with poor prognosis in Ewing sarcomas [21] and synovial sarcomas [12]. Higher infiltration of CD163+ macrophages has also been correlated with poor prognosis in MLPS [22], consistent with our results. Interestingly, in our study, DDLPS patients with higher infiltration of CD163+ macrophages showed a trend toward favorable outcome, which was also seen in the results of Dancsok et al. [17], perhaps pointing to the uniqueness of DDLPS and the complexity of how CD163+ macrophages contribute to disease progression. The CD47/signal-regulatory protein α (SIRPα) complex is a key macrophage-related immune checkpoint, which has been increasingly recognized as a promising therapeutic target in DDLPS [23,24]. In LPS, the prognostic impact of CD47/SIRPα signaling in patients has not been reported, although many patients have been found to have CD47 expression in tumor cells, as well as infiltrating SIRPα-positive macrophages [17]. Further studies uncovering the role of CD47/SIRPα signaling in LPS are warranted.
The recent success of immune checkpoint inhibitors, such as PD-1 or PD-L1 inhibitors, in some malignancies has garnered increased interest in immunomodulatory therapies [25,26]. In the phase 2 clinical trial of the anti-PD-1 inhibitor pembrolizumab in advanced soft tissue sarcomas (SARC028) [27], it was noted that a higher baseline density of TILs in the tumor immune microenvironment was correlated with the objective response rate [28], and patients with a B cell-rich immune signature demonstrated high response rates [29]. Furthermore, although the efficacy of pembrolizumab was limited in this study, an objective response was achieved in two of ten patients with DDLPS [27]. While it is notable that the association between PD-L1 expression and response to immune checkpoint inhibitors remains unclear, our results suggest that DDLPS and PLPS, which accumulate a higher mutational burden than MLPS, exhibit higher immunogenicity and are more likely to respond to immune checkpoint inhibitors than MLPS. The current understanding of the molecular biology of PLPS is limited, and the efficacy of immune checkpoint inhibitors in PLPS remains unclear. However, our findings may indicate that anti-PD-1 therapy could also be promising in patients with PLPS, considering the similarity of the tumor immune microenvironment of PLPS to that of DDLPS.
Considering that MLPS are translocation-driven sarcomas with a low mutational burden and sparse T cell infiltration, it seems that immunostimulatory approaches may be suitable for MLPS. Immunostimulatory therapies employing adoptive T cell transfer, such as genetically engineered T cell receptor therapy and chimeric antigen receptor therapy, have demonstrated dramatic effects in some malignancies [30,31]. Immunostimulatory therapy has been of particularly high interest in sarcomas, since a majority of patients with MLPS express the highly immunogenic cancer-testis antigen New York esophageal squamous cell carcinoma 1 (NY-ESO-1) [32,33]. NY-ESO-1 is considered to be an attractive immunotherapeutic target because cancer-testis antigens are expressed only in germ cells of the testis, not in other adult tissues, and are atypically re-expressed in various malignant tumors [32,33]. NY-ESO-1 is also expressed in approximately 80% of synovial sarcomas [34], and immunotherapies with autologous T cells transduced with a T cell receptor directed against NY-ESO-1 have demonstrated efficacy in patients with metastatic or refractory synovial sarcoma [35]. While immunotherapies against NY-ESO-1 are promising for patients with MLPS, antigen-specific adoptive T cell therapies require HLA class I expression on targeted cells for recognition. Antigen presentation by HLA class I on the tumor surface is essential for the recognition of tumor cells by conventional CD8+ T cells. It is also known that loss or downregulation of HLA class I molecules is a common mechanism for tumor cells to escape from recognition by CD8+ T cells [36]. In the past, Pollack et al. [37] suggested that MLPS may evade immune recognition through expression of a lower level of HLA class I; however, this had only been indicated by evaluating gene expression by RNA-seq. In our study, we report that protein expression of HLA class I is lost or downregulated in a majority of MLPS; furthermore, we found that all 16 metastatic specimens showed loss of HLA class I expression. Although further analysis to clarify the underlying mechanisms of downregulation of HLA class I in MLPS is warranted, these results suggest that the loss or downregulation of HLA class I expression may be a substantial obstacle to T cell-based immunotherapies for patients with MLPS. Zhang et al. [36] performed interferon-γ (IFN-γ) treatment in patients with synovial sarcoma and MLPS and demonstrated that IFN-γ treatment can increase the expression levels of HLA class I and PD-L1. Interestingly, this study included two patients with MLPS, and both patients were initially negative for HLA class I, but HLA class I expression became detectable after the IFN-γ treatments. Significantly, this implies that the tumor immune microenvironment in patients with MLPS could be manipulated to facilitate immunotherapies, including both immunomodulatory and immunostimulatory therapies.

Fig. 3 Expression of HLA class I in each histological type of liposarcoma. a Various expression levels were found in patients with MLPS, whereas no patients were negative for HLA class I expression. Scale bar represents 50 µm. b Most patients with MLPS showed loss or downregulation of HLA class I expression. c, d The number of infiltrated CD8+ lymphocytes tends to be higher in patients with high expression of HLA class I in both MLPS and DDLPS. MLPS myxoid liposarcoma, DDLPS dedifferentiated liposarcoma, PLPS pleomorphic liposarcoma
Considering the inhibitory role of HLA class I in NK cell function, our results also suggest that NK cell therapies could be a promising treatment option for patients with MLPS [38]. Although the effect of adoptive transfer of NK cells has been demonstrated in hematologic malignancies, the efficacy of NK cell therapy in solid tumors has been limited to early-stage patients who have minimal residual tumor [39]. Our findings demonstrating the lack of HLA class I in MLPS suggest that adoptive NK cell therapies could be a promising treatment option for MLPS patients in whom wide resection is not possible or who have metastatic disease.
There are several limitations to be noted. First, despite the large number of MLPS patients, the numbers of DDLPS and PLPS patients enrolled in the current study are relatively low. This may have led to some inconclusive findings in the survival analyses for DDLPS and PLPS that did not reach statistical significance. Second, in many cases, molecular confirmation of the diagnosis was not available, and some patients may have been subclassified inaccurately, which could affect the results. Third, while many immune checkpoint markers are now known, we only evaluated PD-L1 expression. It is possible that MLPS tumors express other immune checkpoint molecules that we did not investigate, such as PD-L2 and TIM-3, among others [40]. Furthermore, although additional biomarkers such as tumor mutation burden, microsatellite instability, and DNA mismatch repair functional status have shown predictive value for response to immune checkpoint blockade, we did not investigate these biomarkers in this study [41-43].
Finally, a recent report from Petitprez et al. [29] demonstrated that a strong B lineage gene signature, determined by the MCP-counter tool, was significantly associated with improved overall survival regardless of other immune factors such as high or low CD8+ TILs or FOXP3+ Tregs. Furthermore, a subclassification of patients treated on SARC028 demonstrated that a group defined by a unique immune profile, characterized by a high density of B cells and the presence of tertiary lymphoid structures, could yield the highest response rate to PD-1 blockade therapy, further highlighting the importance of B cells in the tumor microenvironment of sarcomas. In our study, we did not evaluate the infiltration of B cells, which clearly is a limitation. Further analysis will be necessary to evaluate the contribution of B cells to the LPS microenvironment. Additional larger-scale studies are necessary to further dissect how various LPS tumors manipulate their immune microenvironment to evade immune surveillance and to assess the role of immunotherapeutic approaches in LPS.

Fig. 5 legend (continued): No immune characteristics were found to be significantly associated with differences in PFS in patients with DDLPS (f-k). MLPS myxoid liposarcoma, DDLPS dedifferentiated liposarcoma
In conclusion, we have demonstrated here that MLPS have a distinct tumor immune microenvironment from other LPS subtypes. While the overall number of infiltrating TILs and macrophages in MLPS patients was significantly lower than in patients with DDLPS or PLPS, MLPS patients with high macrophage numbers had poor outcomes. In addition, loss or downregulation of HLA class I was frequently found in patients with MLPS. Furthermore, no patients with MLPS were positive for PD-L1, whereas about one quarter of patients with DDLPS and PLPS were positive. Overall, the tumor immune microenvironment of the translocation-associated MLPS is markedly different from that of the non-translocation-associated DDLPS and PLPS, suggesting that current approaches to cancer immunotherapy, consisting of immunostimulatory and immunomodulatory approaches [25,26,35], may not be as effective in MLPS as in other subtypes of LPS.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/.
Enhanced expression of histone chaperone APLF associate with breast cancer
DNA damage-specific histone chaperone Aprataxin PNK-like factor (APLF) regulates mesenchymal-to-epithelial transition (MET) during cellular reprogramming. We investigated the role of APLF in the epithelial-to-mesenchymal transition (EMT) linked to breast cancer invasiveness and metastasis. Here, we show that significantly elevated APLF expression is present in tumor sections of patients with invasive ductal carcinoma when compared to their normal adjacent tissues. APLF was significantly induced in triple negative breast cancer (TNBC) MDAMB-231 cells in comparison to invasive MCF7 or normal MCF10A breast cells, a finding supported by studies on invasive breast carcinoma in The Cancer Genome Atlas (TCGA). Functionally, APLF downregulation inhibited proliferative capacity, altered cell cycle behavior, induced apoptosis and impaired the DNA repair ability of MDAMB-231 cells. Reduction in APLF level impeded the invasive, migratory, tumorigenic and metastatic potential of TNBC cells, with loss of expression of genes associated with EMT and upregulation of the MET-specific gene E-cadherin (CDH1). So, here we provide novel evidence for enrichment of APLF in breast tumors, which could regulate metastasis-associated EMT in invasive breast cancer. We anticipate that APLF could be exploited as a biomarker for breast tumors and could additionally be targeted to sensitize cancer cells towards DNA damaging agents. Electronic supplementary material: The online version of this article (10.1186/s12943-018-0826-9) contains supplementary material, which is available to authorized users.
Recently, we demonstrated that reduced expression of the histone chaperone APLF could enhance the kinetics and efficiency of reprogramming of mouse embryonic fibroblasts to induced pluripotent stem cells [1]. APLF acts as a DNA repair factor involved in non-homologous end joining (NHEJ)-mediated repair of DNA double strand breaks (DSBs) [2]. As a histone chaperone, APLF specifically binds to the histone H3/H4 tetramer and can recruit variants of histone H2A to the damaged sites on DNA [3]. We demonstrated that APLF downregulation augmented the mesenchymal-to-epithelial transition (MET) associated with cellular reprogramming of mouse embryonic fibroblasts [1]. Thus, we hypothesized that APLF could induce the reverse phenomenon, epithelial-to-mesenchymal transition (EMT). Being a salient feature of development and wound healing, EMT additionally contributes to the invasive and metastatic behavior of tumor cells. So, based on the involvement of APLF in MET and DNA repair, we studied the role of APLF in breast cancer.
Enhanced expression of APLF in breast cancer
A tissue array (US Biomax, Inc., #OD-CT-RpBre03-004) for invasive ductal breast carcinoma with 31 matched control and normal (adjacent tissue or ANT) sections was investigated for the expression of APLF. Tumor sections demonstrated significant enrichment in APLF expression compared to matched normal sections (Fig. 1a, b). To ensure the uniformity of the results, we acquired invasive ductal breast tumors and ANTs from the Regional Cancer Centre, Thiruvananthapuram, India and performed IHC with another APLF antibody (indicated as #2), generated in Prof. Ivan Ahel's lab at Oxford University (a kind gift) [3], along with the commercially available antibody (antibody #1). A similar trend in APLF appearance in the tumor vs. normal adjacent sections was observed irrespective of the source of antibody or patient sample (Additional file 1: Figure S1A). Immunofluorescence analysis of APLF expression in the aforementioned samples demonstrated significantly enhanced expression of APLF in the tumor sections (Additional file 1: Figure S1B). Analysis of the TCGA study on invasive breast carcinoma [4] for APLF alterations, comprising a large cohort of patient samples (n = 817), further demonstrated an upregulation of APLF mRNA in invasive ductal carcinoma (IDC) of basal origin (27%) and in patients with triple negative breast tumors (22%) (Fig. 1c). Thus, the TCGA study demonstrated a substantial increase in APLF level with increasing invasive behavior of the breast cancer subtypes. Next, we determined the APLF level among cell lines with varying degrees of invasive and metastatic potential. Normal MCF10A breast cells expressed a minimal level of APLF, followed by invasive MCF7 cells, with the highest level in TNBC MDAMB-231 cells, the latter two being derived from patients diagnosed with IDC (Additional file 1: Figure S2A). A significantly lower level of APLF was expressed in the pure luminal subtype lines MCF7 and T47D than in cell lines of basal origin, including MDAMB-468, SUM149 and MDAMB-231 (Fig. 1d). The migration and invasive potential of MCF7 was the lowest, followed by SKBR3, and was the highest in MDAMB-231 cells (Additional file 1: Figure S2B, S2C).

Fig. 1 Enhanced APLF expression marks breast cancer. a Representative picture from the tissue array (US Biomax Inc., #OD-CT-RpBre03-004) for invasive ductal breast carcinoma (IDC), in which 31 matched control and normal (adjacent tissue or ANT) sections were investigated for the expression of APLF by IHC following standard protocol. Scale bar: 50 μm. b Plots represent expression of APLF in adjacent normal tissues (ANT) and matched tumors, respectively. Expression values were determined by histo-scoring and expressed as median with 95% confidence interval. Statistical significance was determined using the Wilcoxon rank sum test. A.U. = Arbitrary Unit. c The TCGA study on invasive breast carcinoma samples was analyzed. The pie chart represents the percentage of alteration in APLF mRNA expression among different subtypes of breast cancer patients [5]. d Different breast cancer cell lines with varying degrees of invasive potential were assessed for the expression of APLF. mRNA and protein were isolated from all these cell lines and examined for the expression of APLF by qRT-PCR (upper panel) and western blot analysis (lower panel). Error bar = S.E.M for three independent experiments. Statistical analysis was performed using the Student t-Test function, *p < 0.05, **p < 0.01. Band intensity was measured by ImageJ software, RBI = Relative Band Intensity. A representative image for the blot has been presented.
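The tumor-versus-ANT comparison in Fig. 1b relies on per-section histo-scores compared with a Wilcoxon rank-sum test. The minimal SciPy sketch below uses invented histo-score values purely to illustrate such a comparison; it is not the authors' statistical workflow.

```python
import numpy as np
from scipy import stats

# Hypothetical histo-scores (arbitrary units) for matched sections
ant_scores = np.array([20, 35, 15, 40, 25, 30, 18])      # adjacent normal tissue
tumor_scores = np.array([80, 95, 60, 120, 70, 85, 90])   # invasive ductal carcinoma

# Wilcoxon rank-sum test, as used for the ANT vs. tumor comparison
statistic, p_value = stats.ranksums(ant_scores, tumor_scores)
print(f"rank-sum statistic = {statistic:.2f}, p = {p_value:.4f}")

# Medians reported alongside the test
print("median ANT:", np.median(ant_scores), "median tumor:", np.median(tumor_scores))
```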
The increase in APLF expression in MDAMB-231 cells was further confirmed by the use of APLF antibody #2 (Additional file 1: Figure S2D). We were intrigued to understand whether this is a general phenomenon prevalent across cancers or is restricted to breast cancer only. So, we analyzed the Cancer Cell Line Encyclopedia for APLF expression in invasive and metastatic cell lines that originated from different primary adenocarcinomas [5]. Interestingly, reduced expression of APLF was observed in cell lines from pancreas, large intestine and prostate, unlike the induced expression in MDAMB-231 cells (Additional file 1: Figure S2E). In the syngeneic colon cancer cell lines SW480 and SW620, no significant change in APLF expression was observed despite their different metastatic potential (Additional file 1: Figure S2F). Thus, the tissue array of patient samples, the cell lines and the analyses of TCGA studies provided evidence that APLF upregulation associates with breast cancer and warrants an understanding of the role of APLF in breast cancer.
APLF regulates cellular machinery
Often, the proliferation potential of cancer cells has been correlated with the aggressiveness of the tumor. As APLF expression is highest in MDAMB-231 cells, we exploited them for further mechanistic studies. To understand the role of enhanced expression of APLF, we stably downregulated APLF expression by shRNA-mediated knockdown (referred to as APLF-kd hereafter). A significant depletion in APLF expression to 90% was observed in MDAMB-231 cells (Fig. 2a). In silico analysis for checking shRNA specificity using SpliceCenter [http://projects.insilico.us/SpliceCenter/siRNACheck], a web-based bioinformatics tool, did not detect any off-targets for this APLF shRNA (Additional file 1: Figure S3A). An additional assay was performed to check the specificity of the APLF antibody. HEK293 cells, which have a very low to almost undetectable endogenous level of APLF, showed a significant increase in APLF level upon ectopic expression, confirming the specificity of the antibody (Additional file 1: Figure S3B, S3C). APLF downregulation decreased the survivability of MDAMB-231 cells to 20% of control cells (Fig. 2b). Cell cycle analysis, after synchronization at the G0/G1 phase by serum starvation, demonstrated a significant increase in the G1-phase population of APLF-kd cells and a reduction in the S and G2/M phase populations compared to control cells (Fig. 2c). But is this effect of APLF restricted to tumor cells, or is it a global phenomenon? To answer that, we knocked down APLF expression in MCF10A, a normal breast cell line (Additional file 1: Figure S4A). Downregulation of APLF did not alter the distribution of the MCF10A cell population among the different phases of the cell cycle when compared to control cells (Additional file 1: Figure S4B). Phenotypically, APLF-kd cells were indistinguishable from control MCF10A cells (Additional file 1: Figure S4C). Thus, the effect of APLF downregulation is tumor-specific and not global.
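The survivability and cell-cycle figures quoted above come from an MTT assay and flow-cytometric phase gating. As a rough, hypothetical illustration of how such percentages are commonly derived (not the authors' actual pipeline, and with invented readings), one could compute:

```python
import numpy as np

def relative_viability(od_treated, od_blank, od_control):
    """Percent viability from MTT absorbance, relative to control cells."""
    return 100.0 * (np.mean(od_treated) - od_blank) / (np.mean(od_control) - od_blank)

def phase_fractions(counts_g1, counts_s, counts_g2m):
    """Percentage of gated cells in each cell-cycle phase."""
    total = counts_g1 + counts_s + counts_g2m
    return {"G0/G1": 100 * counts_g1 / total,
            "S": 100 * counts_s / total,
            "G2/M": 100 * counts_g2m / total}

# Hypothetical absorbance readings and event counts
print(relative_viability([0.22, 0.20, 0.21], 0.05, [0.85, 0.82, 0.88]))  # 20.0 (% of control)
print(phase_fractions(6500, 2000, 1500))  # {'G0/G1': 65.0, 'S': 20.0, 'G2/M': 15.0}
```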
Ectopic expression of APLF in MDAMB-231 cells (Additional file 1: Figure S4D, S4E) resulted in increased S-phase and G2/M-phase populations and a reduced G1-phase population compared to control MDAMB-231 cells (Additional file 1: Figure S4F). Cyclin D1, associated with an increased rate of proliferation, marks the G1-S phase transition, and APLF downregulation resulted in a reduced CYCLIN D1 level in MDAMB-231 cells (Fig. 2d). Interestingly, the histone variant MacroH2A.1, a repressive chromatin mark, can regulate Cyclin D1 expression in different types of cancer, including melanoma and osteosarcoma. APLF can recruit MacroH2A.1 at DNA damage sites [3] as well as within the promoters of transcription factors associated with EMT [1]. Upon downregulation of APLF, significant enrichment of MACROH2A.1 was observed at the CYCLIN D1 promoter in APLF-kd cells in comparison to control cells (Additional file 1: Figure S4G). This recruitment could repress Cyclin D1 expression, resulting in the inhibition of proliferation of APLF-kd MDAMB-231 cells.
APLF−/− mice have mildly attenuated DNA repair and delayed development of myeloid neoplasms upon exposure to ionizing radiation. In response to the DNA double-strand break-inducing agent etoposide, at 0 h post-recovery, γH2AX accumulation was significantly higher in APLF-kd cells than in control cells (Fig. 2e). With time, the γH2AX level decreased and reached a level similar to that of untreated control cells at 24 h, whereas APLF-kd cells retained ~40% of the γH2AX level even after 24 h of recovery (Fig. 2e). So, downregulation of APLF could compromise the damage-induced DNA repair machinery, thereby enhancing the probability of cell death arising from an inefficient DNA damage response. APLF is a constituent of the NHEJ repair pathway. Along with its interacting partner PARP3, APLF accelerates NHEJ by providing a scaffold for the recruitment of Ku complexes [2]. So, a defect in DNA repair is not unexpected in cells with a reduced level of APLF.
The loss in survivability in response to APLF downregulation in MDAMB-231 cells might be a manifestation of cell death by apoptosis or necrosis. Annexin-V and PI staining demonstrated a significant increase in apoptosis in APLF-kd MDAMB-231 cells (the apoptotic population doubled in APLF-kd cells relative to control cells) (Additional file 1: Figure S4H). This was further confirmed by the induced expression of cleaved Caspase 3, the primary apoptotic factor associated with DNA fragmentation (Fig. 2f).
To demonstrate the specificity of the shRNA, APLF shRNA-resistant MDAMB-231 cells were analyzed for the effect of APLF knockdown. Cell survivability was significantly increased in APLF-shRNA-resistant cells in comparison to APLF-kd cells (Additional file 1: Figure S5A, S5B). The distribution of the cell population among the different phases in APLF-shRNA-resistant cells was similar to that of control MDAMB-231 cells (Fig. 2c). So, functionally, APLF can modulate the entry of cells into the cell cycle and interfere with the proliferative capacity of breast cancer cells.
APLF modulates invasive, tumorigenic and metastatic potential
To determine the role of APLF in invasiveness, we performed an in vitro Matrigel invasion assay. Only 2% of APLF-kd cells could invade the membrane, in comparison to 50% of control MDAMB-231 cells (Fig. 2g). Prior to this, we checked whether there was any significant loss in cell number due to apoptosis in APLF-kd MDAMB-231 cells during the time period necessary to perform the invasion assay. Around 40 h of continuous culture did not induce apoptosis in APLF-kd cells in comparison to control cells (Additional file 1: Figure S6A). Ectopic expression of APLF (Additional file 1: Figure S4D, S4E) enhanced the invading potential to 2-fold compared to control cells (Additional file 1: Figure S6B). Cell migration, measured by wound closure assay, was significantly reduced in APLF-kd cells relative to control cells (Fig. 2h). Upon subcutaneous injection of control and APLF-kd cells into immune-compromised mice, the tumors generated from APLF-kd cells were significantly smaller than the tumors generated from control cells (Fig. 2i). This was expected given the severe loss in cell survivability, cell cycle arrest and increased apoptosis upon downregulation of APLF in MDAMB-231 cells. Next, we evaluated the effect of APLF depletion on metastasis. Upon lateral tail vein injection of control and APLF-kd MDAMB-231 cells into immune-compromised mice, a significantly higher number of metastatic nodules in the lungs was observed in mice injected with control cells (Fig. 2j, k). mRNA analysis demonstrated a significant reduction in the APLF level within the lung sections derived from mice injected with APLF-kd cells (Fig. 2l). Thus, APLF downregulation impeded the invasive, tumorigenic and metastatic potential of the TNBC MDAMB-231 cells. Although cell proliferation, tumor growth and invasion/migration potential are all distinct features of carcinogenesis, what we can infer from the results demonstrated here is that the cell cycle arrest resulting from APLF downregulation in MDAMB-231 cells might be the primary effect of APLF in breast cancer, which further leads to the decrease in tumor growth.
Recent reports suggest that induced expression of DNA repair factors associates with cancer metastasis [6]. Expression of a panel of 34 DNA-repair genes from two different studies on primary breast tumors was significantly increased in tumors of metastatic origin [6]. As APLF knockdown could impede the metastatic behavior of MDAMB-231 cells (Fig. 2j-l), we determined the expression of these DNA repair-related genes in response to APLF downregulation. GSEA of these genes clustered them into different repair pathways including Mismatch Repair (MMR), Base Excision Repair (BER), Homologous Recombination (HR), DNA replication, Nucleotide Excision Repair and the p53-signaling pathway (Additional file 2) [6]. Representative genes from all pathways were screened, and upon APLF downregulation, 7 genes including BRCA1, FANCG, PCNA, MSH5, TERF1, RAD21 and UBE2V2 were significantly downregulated whereas CRY2 and SMC4 were upregulated, while other repair genes remained unaltered (Fig. 2m). Thus, APLF-mediated regulation of DNA repair genes could further contribute to breast cancer metastasis.

Fig. 2 (see figure on previous page) APLF downregulation influences cellular machinery and impedes the invasive, tumorigenic and metastatic potential of metastatic MDAMB-231 cells. a MDAMB-231 cells were transduced with lentiviral particles expressing shRNA against APLF or the empty pLKO.1 vector (empty vector). Lentiviral vectors containing shRNA targeting human APLF were cloned in the pLKO.1 (Addgene) vector [1]. The extent of knockdown was measured at the protein level by western blot. b Viability of the control and APLF-kd MDAMB-231 cells was determined by MTT assay. c Cell cycle analysis for both control and APLF-kd cells was performed. Representative plots indicate the percentage of cells present in a given phase for control and APLF-kd cells. d The G1/S-phase specific marker CYCLIN D1 was determined in control and APLF-kd cells by western blot analysis. e Control and APLF-kd cells were exposed to the DNA DSB-inducing agent etoposide (10 μM) for 4 h followed by recovery in the absence of etoposide. γH2AX-positive foci cells were determined by immunofluorescence analysis to demonstrate the defect in DNA repair after 0 h and 24 h of recovery. The bar graph represents the fraction of γH2AX-positive foci in control and APLF-kd cells. Nuclei with ≥5 foci were counted as positive. f The same set of samples was analyzed for the expression of cleaved Caspase 3 by western blot as a measure of apoptosis in response to APLF knockdown. g The invasion assay was performed in an invasion chamber from Corning (Corning® BioCoat™ Matrigel® Invasion Chamber; 354480). The graph represents the percentage of cells invaded, expressed as the number of cells invaded relative to the total number of cells added to the upper chamber at the start of the experiment. h The same set of cells was investigated for their migration or wound healing potential. The bar graph represents the percentage of wound recovery, expressed as [1 - (width of the wound at a given time/width of the wound at t = 0)], for control and APLF-kd MDAMB-231 cells. i Control and APLF-kd MDAMB-231 cells were subcutaneously injected into female NOD/SCID mice (n = 3 for each group; age = 6-8 weeks). After 5 weeks, mice injected with control cells developed tumors of significantly bigger size than mice injected with APLF-kd cells. A representative picture is included and the experiment was repeated independently 3 times. j, k To determine the effect of APLF on in vivo metastatic potential, both control and APLF-kd cells were injected into the lateral tail vein of female NOD/SCID mice (n = 3 for each group; age = 6-8 weeks). Prior to this, control and APLF-kd MDAMB-231 cells were transfected with pEGFPC1 (Clontech; 6084-1). Six weeks after injection, lungs were dissected and examined for the presence of metastatic nodules (black arrows). A representative lung and H&E staining of a metastatic tumor are shown. l Expression of APLF in the lungs was determined by RT-PCR. Human APLF and ACTIN confirmed the presence of MDAMB-231 cells in the lung sections. Mouse Gapdh was used as the negative control. m Control and APLF-kd MDAMB-231 cells were investigated for the expression of DNA repair genes associated with breast cancer metastasis. mRNA was extracted and analyzed for the expression of genes by qRT-PCR. Error bar = S.E.M for three independent experiments. Statistical analyses were performed using the Student t-Test function, *p < 0.05, **p < 0.01.
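The invasion and wound-recovery percentages defined in the Fig. 2 legend reduce to two simple ratios. The Python sketch below merely restates those formulas with invented measurements; it is not the authors' analysis code, and the numbers are illustrative only.

```python
def percent_invasion(cells_invaded, cells_seeded):
    """Percentage of cells that crossed the Matrigel-coated membrane."""
    return 100.0 * cells_invaded / cells_seeded

def percent_wound_recovery(width_t, width_0):
    """Wound-closure metric: 1 - (wound width at time t / width at t = 0)."""
    return 100.0 * (1.0 - width_t / width_0)

# Hypothetical measurements
print(percent_invasion(500, 25000))          # 2.0  (an APLF-kd-like value)
print(percent_invasion(12500, 25000))        # 50.0 (a control-like value)
print(percent_wound_recovery(300.0, 800.0))  # 62.5 % wound recovery
```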
APLF regulates genes in EMT
In order to metastasize, epithelial cells dislodge into the blood circulation from primary tumors by invading through the basal lamina. EMT has been attributed to these phenomena and is often guided by a set of genes. Upon APLF knockdown, EMT-specific markers including SNAI1 and SNAI2 were downregulated, whereas the level of the MET-favoring CDH1 was significantly upregulated (Fig. 3a, b). An EMT gene set analysis from the Molecular Signature database (Gene Ontology, GO:0001837) [7] for the TCGA invasive breast carcinoma study [5] showed that the majority of EMT-specific genes correlated with the upregulation of APLF (Additional file 1: Figure S7A). Individual assessment of genes not included in the original study [7] (Additional file 3) demonstrated a significant correlation in the expression of EMT genes with APLF (Additional file 1: Figure S7B). Ectopic expression of APLF in MDAMB-231 cells resulted in enhanced expression of EMT-specific genes (Additional file 1: Figure S8).
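The qRT-PCR expression changes described throughout this section are conventionally quantified with the comparative Ct (2^(−ΔΔCt)) method. The sketch below is a generic, hypothetical illustration of that calculation with invented Ct values and an assumed reference gene; it is not stated in the paper which exact quantification scheme the authors used.

```python
def fold_change_ddct(ct_gene_kd, ct_ref_kd, ct_gene_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ΔΔCt) method.

    ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(knockdown) - ΔCt(control).
    """
    d_ct_kd = ct_gene_kd - ct_ref_kd
    d_ct_ctrl = ct_gene_ctrl - ct_ref_ctrl
    dd_ct = d_ct_kd - d_ct_ctrl
    return 2 ** (-dd_ct)

# Hypothetical Ct values: an SNAI1-like target vs. an ACTIN-like reference gene
print(fold_change_ddct(ct_gene_kd=27.0, ct_ref_kd=18.0,
                       ct_gene_ctrl=25.0, ct_ref_ctrl=18.0))  # 0.25 -> downregulated
print(fold_change_ddct(ct_gene_kd=23.0, ct_ref_kd=18.0,
                       ct_gene_ctrl=25.0, ct_ref_ctrl=18.0))  # 4.0  -> upregulated
```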
But mechanistically, how could APLF regulate these genes? In this context, we explored the role of the histone variant MacroH2A.1, which is associated with APLF during reprogramming and DNA damage-induced repair [1,2]. The histone variant MacroH2A.1 is associated with compact chromatin, resulting in repressed loci. We observed that the expression of MacroH2A.1, encoded by H2AFY, increased upon downregulation of APLF at both the mRNA and protein levels, although the change was statistically insignificant (Additional file 1: Figure S9A, S9B); TCGA analysis [5], however, showed significant downregulation of H2AFY in patients with increased APLF expression (Additional file 1: Figure S7B). So, we determined MACROH2A.1 recruitment at the promoters of EMT-specific genes. We observed that the SNAI1, SNAI2, MMP3 and MMP9 promoters were significantly enriched for MACROH2A.1 in APLF-kd cells compared to control MDAMB-231 cells (Fig. 3c). The repressed loci of EMT-related genes further conform to our observation of downregulation of these genes in APLF-kd MDAMB-231 cells (Fig. 3a, b). However, no significant loss of MACROH2A.1 was observed at the CDH1 promoter in APLF-kd cells (Additional file 1: Figure S9C), which indicated an additional mechanism operating in the regulation of CDH1 as a function of APLF level.
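The ChIP enrichment values in this work were measured by qRT-PCR relative to a standard curve of input chromatin. A commonly used alternative is the percent-input method, which the hypothetical Python sketch below illustrates with invented Ct values; the antibody, promoter and numbers are ours and not taken from the study.

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """ChIP-qPCR percent-input: adjust the input Ct for the fraction of chromatin
    saved as input, then express the IP signal relative to the adjusted input."""
    ct_input_adjusted = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2 ** (ct_input_adjusted - ct_ip)

def fold_enrichment_over_igg(percent_input_ab, percent_input_igg):
    """Fold enrichment of the specific antibody over the IgG negative control."""
    return percent_input_ab / percent_input_igg

# Hypothetical Ct values for a MACROH2A.1 ChIP at an EMT gene promoter
ab = percent_input(ct_ip=28.0, ct_input=30.0, input_fraction=0.01)
igg = percent_input(ct_ip=33.0, ct_input=30.0, input_fraction=0.01)
print(round(ab, 3), round(igg, 3), round(fold_enrichment_over_igg(ab, igg), 1))
# approximately 4.0 (% input, antibody), 0.125 (% input, IgG), 32.0 (fold over IgG)
```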
Forkhead box protein A1 (FOXA1) belongs to the group of forkhead transcription factors and is termed a "pioneer factor" in breast cancer [8]. FOXA1 repression shifts the breast cancer subtype from luminal to basal, while its expression restricts metastasis of the luminal subtype by inducing the CDH1 level [9]. Interestingly, co-expression analysis of invasive breast carcinoma samples in TCGA [5] revealed that FOXA1 had the highest negative Pearson correlation with APLF (Pearson score = -0.48, Fig. 3d). FOXA1 expression was upregulated in APLF-kd cells (Fig. 3e). It is known that FOXA1 influences chromatin reorganization by binding to heterochromatin, thereby facilitating expression of the gene relieved from compaction [8]. Expectedly, we failed to observe FOXA1 recruitment at the conserved CDH1 promoter [10] in control MDAMB-231 cells, whereas a significant enrichment of FOXA1 was detected in APLF-kd cells (Fig. 3f). So, downregulation of APLF enhanced CDH1 expression through recruitment of the induced FOXA1 at the CDH1 promoter and hence might restrict EMT of MDAMB-231 cells.
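The co-expression relationship quoted above comes down to a Pearson correlation computed across tumor samples. The minimal SciPy sketch below illustrates that computation with invented expression values rather than the actual TCGA data.

```python
import numpy as np
from scipy import stats

# Hypothetical log-normalized mRNA values across ten tumors
aplf = np.array([1.2, 2.5, 3.1, 0.8, 2.9, 3.4, 1.0, 2.2, 3.0, 2.7])
foxa1 = np.array([3.5, 2.0, 1.1, 3.9, 1.4, 0.9, 3.6, 2.3, 1.2, 1.8])

r, p = stats.pearsonr(aplf, foxa1)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")  # a negative r mirrors the reported trend
```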
But how could APLF regulate FOXA1? No significant difference in MACROH2A.1 recruitment at the FOXA1 promoter was observed between control and APLF-kd cells (Additional file 1: Figure S9D). How, then, could the binding of FOXA1 be facilitated under this condition? We reasoned that FOXA1, being a "pioneer factor", could bind to its target site even in a compact chromatin state [8]. Additionally, FOXA1 recruitment is facilitated by different histone modification marks, including histone H3K4me2 [8]. Interestingly, during reprogramming of MEFs to iPSCs, APLF downregulation induced the H3K4me2 level [1]; this might additionally contribute to the enhanced recruitment of FOXA1 at the CDH1 promoter in APLF-kd MDAMB-231 cells. But, eventually, what drives the upregulation of FOXA1 in APLF-kd cells? Loss of FOXA1 expression in metastatic MDAMB-231 cells corresponds to its hypermethylated promoter, including both the repressive histone H3K27me3 mark and DNA methylation [9]. Histone chaperones can interact with histone-modifying enzymes and thereby modulate different histone modification patterns. Histone H3K27 is tri-methylated by components of Polycomb Repressive Complex 2, namely EZH2 and EZH1. Among them, Enhancer of Zeste 2 (EZH2) has been implicated in metastasis and invasiveness of breast cancer. So, we studied the EZH2/H3K27me3 axis in the regulation of FOXA1 as a function of APLF. TCGA sample analysis for invasive breast cancer [5] demonstrated a positive correlation of EZH2 expression in patients with upregulated APLF expression (Additional file 1: Figure S7B). Upon downregulation of APLF, EZH2 expression was reduced at both the mRNA and protein levels in MDAMB-231 cells (Additional file 1: Figure S9E; Fig. 3g). We observed that APLF knockdown could significantly enhance the recruitment of MACROH2A.1 at the EZH2 promoter in MDAMB-231 cells (Additional file 1: Figure S9F). This could account for the loss in EZH2 expression. To investigate further, we studied the recruitment of EZH2 at the endogenous FOXA1 promoter [9]. The FOXA1 promoter in control MDAMB-231 cells was significantly enriched with EZH2, whereas no recruitment was observed in APLF-kd cells (Fig. 3h; Additional file 1: Figure S9G). No change in the global level of H3K27me3 was observed between control and APLF-kd MDAMB-231 cells (Additional file 1: Figure S9H). It should be noted here that EZH2 and EZH1 functions are complementary and sometimes redundant. RNA-seq analysis of TCGA samples [5] demonstrated a positive correlation of APLF with EZH2 but a negative correlation with EZH1 (Additional file 1: Figure S7B). So, upon downregulation of APLF in MDAMB-231 cells, the contrasting EZH2 and EZH1 levels might have resulted in an unaltered global H3K27me3 level. However, APLF downregulation significantly reduced the fold enrichment of the H3K27me3 mark at the FOXA1 promoter in comparison to control MDAMB-231 cells (Fig. 3i). Loss of H3K27me3 renders the chromatin open, resulting in increased expression of the gene. Thus, the absence of EZH2 recruitment, followed by loss of the repressive H3K27me3 mark upon downregulation of APLF, resulted in enhanced expression of FOXA1.
Conclusion
Here, we provided novel evidence for enrichment of APLF in breast tumors, which could regulate metastasis-associated EMT in invasive breast cancer (Fig. 3j).

Fig. 3 (see figure on previous page) APLF regulates EMT. a, b Control and APLF-kd MDAMB-231 cells were investigated for the expression of different genes implicated in EMT. mRNA and protein were extracted and analyzed for the expression of genes by qRT-PCR and western blot, respectively. c Chromatin immunoprecipitation (ChIP) analysis was performed with control and APLF-kd MDAMB-231 cells for the recruitment of MACROH2A.1 at EMT-specific gene promoters. Enrichment of chromatin fragments was measured by qRT-PCR using Sybr green fluorescence relative to a standard curve of input chromatin. IgG was used as the negative control [1]. d Co-expression analysis at the mRNA level between APLF and FOXA1 in samples from the TCGA study [5]. Co-expression analysis demonstrated the maximal negative correlation of APLF with FOXA1 expression, supported by a Pearson score of -0.48. e Protein was extracted from control and APLF-kd MDAMB-231 cells and analyzed for the expression of FOXA1 by western blot. f ChIP analysis was performed with control and APLF-kd MDAMB-231 cells. The plots represent the recruitment of FOXA1 at the CDH1 promoter. IgG was used as the negative control. Enrichment of chromatin fragments was measured by qRT-PCR using Sybr green fluorescence relative to a standard curve of input chromatin. g Expression of EZH2 in control and APLF-kd MDAMB-231 cells at the protein level was analyzed by western blot. h ChIP analysis was performed with control and APLF-kd MDAMB-231 cells. The plots represent the recruitment of EZH2 at the endogenous FOXA1 promoter. IgG was used as the negative control. i The same set of cells analyzed in h was investigated for the incorporation of the H3K27me3 mark at the endogenous FOXA1 promoter. The graph represents the fold enrichment with respect to the input. IgG was used as the negative control. j Model depicting the mechanism responsible for downregulation of mesenchymal genes and upregulation of the epithelial gene CDH1 in response to APLF downregulation in TNBC MDAMB-231 cells. Error bar = S.E.M for three independent experiments. Statistical analyses were performed using the Student t-Test function, *p < 0.05, **p < 0.01.
Designing a Theoretical Integration Framework for Mobile Learning
Abstract—New technologies are rapidly changing mobile learning and making it difficult to control. In addition to educational factors and learning content, a modern mobile learning system must take into account the technical and personal aspects of learning, the devices, and aspects related to their evolution and interoperability. Teaching, on the other hand, has also evolved, involving more flexibility in tasks and learning stages and thus using modern technologies that now offer more alternatives. In addition, such tasks may be specific to the learning content as well as the learning context, or furthermore the learner's environment. Traditionally, mobile platform design relies on the skills of a mobile developer whose knowledge allows him to design mobile applications that are useful to users. But with mobile learning, the design phase involves more than just mobile development skills. For example, if you are designing a platform for practical work, the instructors responsible for the training should be involved. However, empirical results show that educators do not integrate technology effectively into their curricula. To enable these instructors to develop mobile learning platforms, it is important to facilitate their integration through a theoretical model that takes into account all the ingredients necessary to complete this learning and balances them in order to ensure its efficiency. In this study, the authors used a thematic synthesis methodology to present a framework for the integration of mobile devices in learning. They focused on three models that they consider the most cited in the field of ICT (information and communication technologies) integration in learning. The five-axis framework consists of enriching the TPACK (Technological Pedagogical Content Knowledge) framework in order to more precisely address mobile learning by covering the following parts: pedagogy, content, mobile technology, learning environment and learner's profile. It describes relatively in depth the various factors involved as well as the effective interconnection to be ensured to achieve an optimal and efficient integration of m-learning. Balancing those five parts will be a matter of collective reflection when designing or consulting on a mobile learning platform.
Introduction
As it transcends all aspects of learning, mobile learning has been defined differently by researchers. Shortened in the literature as m-learning or mlearning, it represents any kind of learning that occurs when the learner is not at a predetermined fixed location, or learning that occurs when the learner takes advantage of the learning opportunities offered by mobile technologies [1]. Mobile learning provides users with many important opportunities, such as ownership and personalized control of the learning process; learners are more comfortable with their own mobile devices, and less preparation is needed for learning [2] [3]. Despite all these advantages, there is major empirical evidence that mobile learning has been poorly used in many education sectors, and research has tended to center more on the technical aspects of the tools and applications than on the learning approach itself. Integration frameworks for mobile learning will enhance its usage by taking into account all aspects of this learning method, including learners' perceptions of it.
This paper investigates different theories and scientific evidence of technology integration in learning to present an m-learning integration framework. First, a brief exposition of the two major scientific currents of mobile learning definition is presented, followed by a review of technology integration frameworks in general and then of those centered specifically on m-learning. While acknowledging all the features identified in other frameworks as important in mobile learning, an enriched model based on the TPACK framework is proposed, highlighting a new, unique combination of distinctive characteristics of current mobile pedagogy to bring more detailed insights to the literature on m-learning. As a conclusion, a Webview rendering architecture is finally presented to explore potential experimentation with this theoretical framework.
Background
Several descriptions of m-learning are present in the literature, but they all take into consideration the close link between the use of mobile devices and learning: the learning process mediated by a mobile device [4]. M-learning can be identified for this paper as a method of learning that enables learners to access learning materials anywhere and anytime using mobile technologies and the Internet [5] [6] [7]. However, how to integrate these devices into learning is a thorny issue [8].
Designed to make it easier for those planning to integrate mobile applications into higher education, MIT has developed the MIT mobile framework for educational institutions [9]. Moodbile is also a framework helping the integration of multiple learning applications into learning management systems [10]. As valuable and informative as they are, these frameworks focus precisely on the integration of technology into other technological systems and not on broader aspects such as educators, learner profiles or contexts of learning.
Many researchers [11] [12] [4] have used activity theory to analyze individuals' development practices and processes, while considering individual and social influences in the use of m-learning. Uden has developed a framework for mobile application design for m-learning. Drawing partly from Vygotsky's work on mediation and the zone of proximal development, Koole has designed the Framework for the Rational Analysis of Mobile Education (FRAME). The work of Kearney et al. extends Koole's framework, including an understanding of "mobile pedagogy" based on the socio-cultural understandings presented in her model. For instructors, however, these frameworks do not offer solid support on how they should proceed with integrating m-learning into the curriculum.
In line with the work done in the field, we have been interested in the general frameworks for the integration of technology into learning as a basis for our work. Three initial frameworks have been identified for further study: the Technological, Pedagogical, and Content Knowledge (TPACK) framework by Mishra and Koehler [14], the i5 framework by Groff and Mouza [13], and the Substitution, Augmentation, Modification and Redefinition (SAMR) framework by Puentedura [15].
The i5 framework
At the origin of this framework, Groff and Mouza (2008) discussed six central factors, each with different variables. These factors interact with each other, creating barriers to the successful integration of technologies into learning:
• Research and policy factors
• Department / school factors
• Factors associated with the teacher
• Factors associated with the technology-enhanced project
• Factors associated with students
• Factors inherent in the technology itself
The authors of the model note that teachers cannot handle all of these factors, although they are all important. They therefore focused on the four factors that can be influenced by instructors through their i5 model, which is a tool to help mainstream technologies in learning.
The SAMR framework
The SAMR model is schematized in four main layers within two margins of evolution of learning. The first is a margin of improvement of the learning practice, with two layers (Substitution and Augmentation). At this level, the technology is integrated to replace an old method without changing the functional aspect of the activity, or to improve the practice using the ease introduced by the technology. The second margin is a transformation phase of the educational activity. Represented in two layers (Modification and Redefinition), the learning activity is this time transformed through technology, either by changing its conceptual nature or by allowing the accomplishment of entirely new tasks that would be inconceivable without the integrated technology.
The TPACK framework
Based on Shulman's model, the TPACK (Technological, Pedagogical, and Content Knowledge) framework by Mishra and Koehler presents three circles instead of the two in the original model. The first two circles being pedagogy and content, Mishra and Koehler added technology as a facilitator of learning. Interactions between and among these bodies of knowledge are equally important to the model, represented as PCK (pedagogical content knowledge), TCK (technological content knowledge), and TPK (technological pedagogical knowledge). The three circles need to interact collaboratively to enable effective integration of technology into learning. Since its first publication in 2006, TPACK has become one of the most influential theories of technology integration in education, as the complex components described above remain open to a large range of educational circumstances. The flexibility of this model is visible in the multiple junctions of its spheres, allowing researchers to adapt it to various contexts and cases.
Methodology
In order to conceive the new m-learning integration model, the authors adopted a thematic synthesis that made it possible to prioritize the transversal data collected previously through a manual search, including terms such as "mobile learning", "m-learning", "m-learning framework", "ICT integration", and "technology in learning". Inspired by the method in Ref [16] and in the same scope as Crompton's work on m-learning integration frameworks [17], the researchers proceeded in three steps:
• First, the authors identified the different themes and fields of the research from the studies collected. Subsequently, they translated these results into a transverse metric. At this stage the codification is rather descriptive but still close to the texts contained in the reported studies. Support software for qualitative and combined research methods was then used.
• The second step consisted in organizing the identified thematic codes into descriptive themes in order to "develop and articulate relationships between the themes and associate conceptually similar themes with one another" [18].
• The last step was to generate analytical themes. At this stage, the synthesis crossed the limits of the initial content of the original studies, proposing new perspectives and conceptualizations.
Findings and Discussion
The thematic synthesis made it possible to identify new associations and connections between the different frameworks that we studied and to consider them as an entry point into our new mobile learning integration framework. The framework proposed below is an enrichment of the TPACK model, also inspired by Koole's framework in [12] and by Kearney et al.'s framework [4], insofar as the close interaction of the newly identified spheres can be considered an efficient integration of mobile technology into learning.
Pedagogy: Method
Mobile technologies can enable the development of innovative pedagogical practices, such as student-centred pedagogies, as well as several communication and problem-solving skills along with critical thinking [19]. The teaching method is the starting point of this five-axis m-learning integration framework. With its influence on all other factors, the teaching method guides the choice of an m-learning platform as well as the required levels of quality and performance. All other components are designed in service of this major factor to ensure the coherence of the platform with the educational purposes behind its design. The pedagogical method represents the way in which learning will be conveyed, for example: collaborative learning, problem-based learning, or experiential learning.
Laru insisted on pedagogically grounded instructional design to turn mobile technologies into effective tools for learning and collaboration [20]. Examples of mobile learning related to theories of learning are presented in Table 1 below [21]. Educators should be fully aware of the consequences of their choices at this stage. Not all pedagogical methods necessarily fit all content or all learners [22]. This sphere should be considered carefully, along with the two adjacent ones, in the process of designing new forms of teaching and learning through mobile technologies [23].
Nevertheless, another important factor at this level is the readiness of instructors to embrace mobile technology as a tool for learning, not just a support for learning content. Instructors who are not familiar with mobile technologies will struggle to conceive learning activities through them, or at least effective ones. The pedagogical method adopted in m-learning platforms should remain open to adaptation as the design of all the spheres progresses, even though it is the main influence on the overall conception.
Content
In this sphere, the same considerations as in the TPACK framework apply. Content is the material intended to be taught to the learners; it must be compatible with the learning policy as well as with the means made available. For instance, middle school courses are very different from undergraduate ones. The same applies to the subjects of these courses: teaching science is very different from teaching arts or history. The knowledge levels, theories, and practices are designed differently.
The pedagogical content knowledge described in Ref [24] covers much of the considerations at this level. It addresses "the core business of teaching, learning, curriculum, assessment and reporting, such as the conditions that promote learning and the links among curriculum, assessment, and pedagogy".
From basic or traditional content to a scenario or practical laboratory work, the teaching subject generates considerable design choices that can sometimes hinder integration. However, standardization of similar content can be a valuable asset for integrating technology into learning. This circle is generally conceived and finalized at the same time as that of the teaching method, in order to establish strong relations and interactions guiding the other considerations of platform design.
Mobile technology
For its part, technology is the backbone of the intended system. To integrate it in an optimal and reasonable way, the technology must be well thought out and well framed. Especially with mobile technologies, making full use of their great potential for flexibility and ease of use is delicate; it can be perceived as a complicated task by instructors and automatically delegated to technology specialists, who are then always consulted for platform administration and technical design. It is very important to mention that this sphere comes after pedagogy and content design. Educators have to first think about a quality course plan and then identify the mobile technologies to support that course. The use of mobile devices should not be the main purpose of the session plan; instead, it should be a good tool for making it work [25].
Wang, Wu, and Wang indicate in Ref [26] that mobile learning platforms should be user-friendly, easy to use, and intuitive in order to be appropriate, engaging, and accommodating to learners. Learners' motivation can be undermined if they encounter technological problems or if they are not sufficiently attracted by the platform [27]. As observed by Ng and Nicholas in Ref [28], students became less engaged and excited; they felt that mobile technology did not help them learn better, facilitate learning, or even make it more interesting. The researchers found that students' positive statements about the effectiveness of a mobile learning program decreased between the start of the program and 12 months later.
In a way, the technology in such platforms guarantees the level of quality that will be presented to educators and learners. However, mobile technology is currently experiencing remarkable growth in terms of innovation, which also adds complexity to the design of mobile learning platforms. A multi-criteria analysis and advanced comparative study of m-learning development approaches was conducted previously and can be of great help to instructors [29].
To summarize, this circle encompasses the entire technical part of the system, namely the platform architecture, the mobile development approach, and the extensions and technical features.
Learning environment
The fourth factor that comes into play is the learning environment, which covers all aspects of the learning context. This notion is particularly important in mobile learning, since it is an integral part of its identity. We share the definition of context presented by Dey in Ref [30]: «any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and application themselves». However, other terms are at the same level of importance and must be considered equally, namely time and space; after all, they are the main elements that constitute the context. Kearney et al. in Ref [4] placed the use of time and space at the centre of their framework, shedding light on several interesting interactions of collaboration, authenticity, and personalization. They also focused on context, as Koole did in Ref [12], highlighting it in her FRAME model; she considers that mobile learning experiences happen within a context of information. This dynamic context encompasses primary, secondary, and higher education or corporate learning environments, as well as formal and informal learning, classroom or distance learning, and field studies [31].
According to Rikala in Ref [27], mobile learning can meaningfully expand the learning environment into authentic contexts, but "the challenges in creating a mobile learning environment may lead to situations in which the devices are employed only for enhancement-level usage where a computer, booklet, or handout is replaced with a ready-made application". He stated that decisions concerning the learning environment can directly influence the formality and spontaneity of learning experiences. Chu in Ref [32], for instance, argued that mobile learning in authentic contexts is not always a great success. He considers it important to design different learning strategies and to invent new ones that take the particularities of mobile learning into account.
The learning environment constitutes the core of a platform design, which is what makes its creation challenging. Laru's study in Ref [33] highlights several important elements for the success of mobile-assisted collaboration: the first is the careful design of the learning environment for group interaction, and the second is the provision of scaffolding and support from educators. Mobile learning platforms can offer a variety of services; coordination is therefore crucial, linking different contexts and optimizing learning. A big part of that is achieved through context awareness.
Schilit et al. in Ref [34] first introduced the term "context-aware"; they consider that context-aware applications can simply adapt to the context. Byun and Cheverst in Ref [35] define such a system as one that can extract, interpret, and especially adapt to different contextual information. In the literature, context-aware systems are also referred to as context-sensitive, situated, contextual, adaptive, located, etc.
Learner's profile
Part of the bigger picture that is the learning context, the learner profile is a key element of context consideration. This last circle can be considered an extension of the previous one, thus closing the context aspect in its entirety. The learner profile falls into the second category of learning context modelling, which is generally divided into two parts: the learning context and the mobile context [36]. The learner profile may include, but is not limited to [37] [38]:
• The competency profile of the learner: all the knowledge, skills, possible attitudes, and role
• The semi-permanent personal characteristics of the learner: the learning style, the different needs and potential learning interests, physical disabilities, or other personal aspects
Yau and Joy in Ref [39] proposed a personalized mobile learning application based on m-learning preferences, in which the learner profile consists of an initial simple questionnaire generated on a one-time basis before learners commence their learning activities, in order to capture their m-learning preferences. The data are stored in the application, and an option to change preferences is provided, although preferences are generally static and set once and for all. They adopted three main preferences as inputs, namely location of study, level of noise/distractions, and time of day, each rated on three levels (strong, medium, or weak) describing the learner's feelings towards them. The questionnaire used in their study was very similar to the learning preferences/styles questionnaires of Felder and Silverman in Ref [40] and Honey in Ref [41], designed to return the learning styles of learners prior to the use of mobile learning or web-based applications. A minimal data-model sketch of such a profile is given below.
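As an illustration only, the following Python sketch shows one hypothetical way to represent such a learner profile and its one-time m-learning preferences; the class and field names are assumptions, not part of the cited works.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class PreferenceLevel(Enum):
    WEAK = 1
    MEDIUM = 2
    STRONG = 3

@dataclass
class MLearningPreferences:
    """One-time questionnaire answers in the spirit of Yau and Joy [39] (hypothetical fields)."""
    study_location: PreferenceLevel
    noise_tolerance: PreferenceLevel
    time_of_day: PreferenceLevel

@dataclass
class LearnerProfile:
    """Minimal learner-profile record combining competencies and semi-permanent characteristics."""
    learner_id: str
    competencies: List[str] = field(default_factory=list)       # knowledge, skills, role
    learning_style: Optional[str] = None                        # e.g. from Felder-Silverman [40]
    accessibility_needs: List[str] = field(default_factory=list)
    preferences: Optional[MLearningPreferences] = None

# Example usage
profile = LearnerProfile(
    learner_id="demo-001",
    competencies=["clinical reasoning: novice"],
    learning_style="reflective",
    preferences=MLearningPreferences(
        study_location=PreferenceLevel.STRONG,
        noise_tolerance=PreferenceLevel.MEDIUM,
        time_of_day=PreferenceLevel.WEAK,
    ),
)
```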
Personalization mechanisms are also one of the main considerations at this stage. They make it possible to manage features such as self-regulation, personalization of content, and the learner's choices and tendencies. These concerns can be found in both the [12] and [4] frameworks.
Experimentation
To further examine the efficiency of this new framework, a system architecture is proposed to adapt an existing LAMS (Learning Activity Management System) platform to the mobile environment through the framework. Although generally an integral part of an LMS (Learning Management System), these systems can also run separately. This type of platform allows building interactive game-informed content, educational activities, and simple educational games. The experimentation is hosted as an R&D project of Mohammed VI University of Health Sciences, which expressed a real need to change the old way of teaching and learning clinical reasoning.
The main objective of this experiment is to validate the new framework for mobile learning integration by evaluating its effectiveness in supporting the creation, adaptation, and improvement of mobile learning platforms. Exploring new fields of exploitation of mobile technologies in learning is also part of this experiment. Throughout the research period, mobile learning was rarely or poorly addressed in the literature on healthcare sciences learning, and that concern also motivated this experimentation in order to enrich that field of research.
Why a LAMS and not an LMS?
Simply because the idea of adapting and integrating mobile devices and educational applications with an LMS has already been explored [42], and since the two have practically similar architectures, the idea of a mobile adaptation for a LAMS presents more interesting challenges for such a mobile implementation.
OpenLabyrinth
Our choice fell on a very interesting platform called OpenLabyrinth, which is an online Activity Management System that allows users to create interactive and informative educational activities, such as virtual patients, simulations, games, mazes and algorithms. It is mostly compared to a flexible online story, similar to the Choose Your Own Adventure style of book. Depending on the decisions the learner makes or the path he or she chooses, the consequences will be different.
Launched at the University of Edinburgh in 2004, the OpenLabyrinth project aimed to change the expensive and cyclical development of computer-based learning packages using tools such as Flash, Adobe's Director, and Authorware. The main purpose was to design an intuitive and easy-to-use tool, capable of supporting as many types of case-based activity designs as possible while requiring minimal time for developing new case-based activities [43].
This LAMS offers elaborate functionality, such as a visual editor of virtual patient cases in the form of a labyrinth that schematizes the different potential paths for learners; like an LMS, OpenLabyrinth also offers authentication, a discussion forum, and several other components.
Method
The web-based version was first submitted to tests by administrators, educators, and learners. Data were collected through a post-survey and interviews to identify specific needs for the mobile adaptation design. The results were aggregated into several considerations, which were then integrated, on the basis of the five-axis m-learning integration framework, into a system design to form the mobile-based adaptation. The steps followed in the research model of the Mobile Clinical Cases Learning System were as follows. Pedagogy (Method): Virtual scenarios are the core concept of the OpenLabyrinth platform. Based more generally on an evolution of PBL (problem-based learning), SBL (scenario-based learning) is a complex combination of learning experiences, resources, and tools constructed in a specific way to address the learners' needs. Typically, scenario-based learning can be defined as the support of active learning strategies, such as problem-based or case-based learning, through interactive scenarios. Learners choose an individual path through an ill-structured or complex problem that should be solved. They experience a real-world context through a well-designed storyline in which they have to apply their problem-solving skills, prior knowledge of the subject, and critical thinking. Many feedback opportunities and hints are provided depending on the decisions they make at each level of the process.
Content: It should be clear that OpenLabyrinth can tackle and explore any educational problem, not just virtual patients. However, virtual patients fulfil the need expressed by the great majority of educators and learners interviewed, as mentioned before. In this case study, virtual patients replace the old way of teaching clinical reasoning, which was a classic PowerPoint presentation. A further effort of collecting material and adapting it to the platform was carried out with the help of educators in order to design the scenarios. A virtual patient is "an interactive computer simulation of real clinical scenarios for the purpose of training, education or medical evaluation" [44]. Many designs are possible through the platform; however, two of them are the most common and powerful for healthcare science teaching and learning, namely linear and branched designs. For instance, linear designs can facilitate the learning of medical protocols, while branched ones can enhance clinical reasoning through advanced complex scenarios. Three themes are mostly chosen to deliver virtual patient designs: storytelling, simulation, and gaming. Storytelling allows learners to explore roles and patterns that progress over time. Simulation ensures grounding in a real context. Gaming provides the means by which virtual patients offer learners the possibility to try different strategies to solve the case while respecting a well-defined set of rules.
Mobile Technology: At this stage, more freedom was possible considering the survey results mentioned earlier. However, three major considerations had to be respected in the design of the mobile application to meet the initial expectations, namely ease of use, interoperability, and full access to all or most of the web-based features of the platform. A multi-criteria analysis and advanced comparative study of m-learning development approaches had been conducted previously, helping instructors choose between native, web, and hybrid development [29]. The best-scoring approach, hybrid development, was used for this mobile adaptation. Ionic v2, considered one of the most powerful frameworks, was used to develop the application. Future administrators of the platform proposed that the design of the application should be very simple, since it is an adaptation for mobile rather than a complete redesign. A decision was therefore made to keep everything as it is on the web-based version and to simply design a WebView that renders the content and encapsulates navigation through the platform for the different mobile operating systems. The architecture below explains the approach adopted for the Mobile Clinical Cases application.
Learning Environment: The historical context of teaching clinical reasoning was a major point of divergence at this level of design. Being less skilled in ICT manipulation, a large proportion of the educators who were supposed to become referents of their virtual patient cases were rather sceptical about a formal adoption of this type of learning. They were not comfortable switching directly to learning with the application in class, they were discouraged when they discovered the effort needed to adapt their old PowerPoint presentations, and some of them were simply not ready to change their way of teaching. On the other hand, learners were very enthusiastic about embracing this new way of learning, especially when they discovered that pathways differ for each player and that they can play the cases as often as they wish. The idea of discussing the cases on the forum was also a very positive point. The final decision was to keep the learning informal, at least for a testing period, and possibly adopt it formally later for fifth-year medical students and third-year nursing students as a tool to keep their knowledge updated while they are on clinical rotations. All context-awareness aspects will be handled later as part of an improvement process of the platform, after gathering enough data. The source code will be enhanced on the server side, and new distributions of the application will be offered as simple upgrades for users.
Learner's Profile: As stated above, it was decided to target medical students as well as nursing students. Considering the differences in knowledge levels as well as the personalized content for each profile, the management of the learner's profile is already present on the server side, mostly in a standard manner (classic authentication with login and password). Many categories of profiles are available for the administrator to choose from, offering specific access for each learner. Groups can be formed and assigned a number of virtual patients, and results can then be tracked separately for each group. Part of the future upgrade will address learner aspects related to context awareness.
Results
Directly after authentication, users land on the home page shown in Figure 6, which gives access to all the other pages: the list of public and assigned cases, profile management, the forum and discussion, the personal collection of VP cases, and the assigned scenarios. The Mobile Clinical Cases application introduced a new workflow on the platform. The first step in this learning process is the creation of the different user profiles by the main administrator. Referring authors then create cases and build content with all the necessary resources, such as CT scans, blood tests, and multimedia. Administrators as well as referring authors have the right to create new cases, edit the corresponding data, and delete them. Authors, however, only have permission to display data for their own cases. A case may also represent a scenario (a set of cases), which may include several related learning topics. Learners assigned to different cases, individually or organized in groups, can start a scenario only after completing the pre-test. They then play the case until the final result of their clinical reasoning, making a precise diagnosis of the case. The discussion forum remains accessible to those who have not passed the pre-test, so they can discuss and collaborate on the case and the concepts covered in it. Finally, after finishing the post-test, which is approximately the same as the pre-test, learners can check the summary of their decisions along the entire pathway and the feedback from the quiz in order to improve their skills on the subject next time. Figure 7 shows this new workflow, while Figure 8 presents the navigation diagram.
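To make the access rules in this workflow concrete, the following Python sketch models the pre-test gating and forum access described above. It is a simplified illustration only; the class names, storage, and method signatures are assumptions and do not reflect OpenLabyrinth's actual code.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CaseAssignment:
    """Tracks one learner's progress through a single virtual-patient case (hypothetical model)."""
    pretest_done: bool = False
    posttest_done: bool = False
    decisions: List[str] = field(default_factory=list)

class ClinicalCasesWorkflow:
    """Minimal sketch of the workflow rules described above."""

    def __init__(self) -> None:
        # learner id -> case id -> assignment state
        self.assignments: Dict[str, Dict[str, CaseAssignment]] = {}

    def assign(self, learner: str, case: str) -> None:
        self.assignments.setdefault(learner, {})[case] = CaseAssignment()

    def complete_pretest(self, learner: str, case: str) -> None:
        self.assignments[learner][case].pretest_done = True

    def can_start_scenario(self, learner: str, case: str) -> bool:
        # Learners may start playing a case only after completing the pre-test.
        assignment = self.assignments.get(learner, {}).get(case)
        return assignment is not None and assignment.pretest_done

    def can_join_forum(self, learner: str, case: str) -> bool:
        # The discussion forum stays open even to learners who have not passed the pre-test.
        return case in self.assignments.get(learner, {})

# Example usage
wf = ClinicalCasesWorkflow()
wf.assign("student-42", "chest-pain-case")
print(wf.can_start_scenario("student-42", "chest-pain-case"))  # False: pre-test not done yet
print(wf.can_join_forum("student-42", "chest-pain-case"))      # True: forum is always open
wf.complete_pretest("student-42", "chest-pain-case")
print(wf.can_start_scenario("student-42", "chest-pain-case"))  # True
```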
Conclusion
Mobile learning introduces great opportunities for the enhancement of collaborative learning by offering more flexibility and personalization of learning and by allowing it to be more student-centred. However, this type of learning is still underestimated or not used effectively in curricula. The five-axis m-learning integration framework aims to provide a thinking guide for the integration of mobile devices into learning curricula. This framework can be combined with other models, such as the ADDIE model (Analysis, Design, Development, Implementation, and Evaluation), in order to define in detail the aspects at stake for optimal integration. The main purpose of this study is to help educators and instructors with limited technology knowledge consider every aspect involved in the integration of mobile learning systems. The experimentation is still running while the researchers collect data to further improve the mobile application. Feedback from researchers and educators on the impact of this new framework would be very valuable for evaluating the efficiency of the model in various fields of learning. Further studies will be conducted, first to analyze user behaviour and application data, and then to address the limitations of the framework and improve anything that may have been omitted.
Anhysteretic Magnetization for NiFeMo Soft Magnetic Compacted Powder
D. Olekšáková*, P. Kollár, M. Jakubčin, P. Slovenský, Z. Birčáková, J. Füzer, M. Fáberová and R. Bureš
Institute of Manufacturing Management, Faculty of Manufacturing Technologies with the seat in Prešov, Technical University of Košice, Bayerova 1, 080 01 Prešov, Slovakia
Institute of Physics, Faculty of Science, P.J. Šafárik University, Park Angelinum 9, 041 54 Košice, Slovakia
Institute of Materials Research, Slovak Academy of Sciences, Watsonova 47, 043 53 Košice, Slovakia
* corresponding author; e-mail: denisa.oleksakova@tuke.sk
Introduction
Nickel-iron alloys (permalloys) are important in many areas of science, from materials research and engineering [1] to the planetary sciences (they are found in most meteorite classes) [2]. The high permeability and low magnetostriction of permalloys are widely exploited in magnetic cores for electrical equipment applications. They are also useful as magnetic shielding materials [3]. Nickel-iron alloys remain attractive systems to study because their applications are among the primary concerns of materials science and technology [4,5].
In the present work, we study Ni-rich, Mo-substituted permalloys. Generally, Mo enhances the permeability of the material even when only a small amount is added [6]. High permeability can also be achieved by reducing the amount of Ni. In turn, Mo increases the electrical resistivity of permalloys and thereby reduces eddy current losses. Iron-nickel-molybdenum alloys (called supermalloy) show excellent high-frequency characteristics [6,7].
The appropriate structure of supermalloy (usually produced in the form of a thin sheet), with an initial permeability much larger than that of pure iron, arises after proper heat treatment. For some applications the form of a sheet is not suitable, so it is logical to try to prepare such material in another form, for example in the form of a ring. This shape is more convenient for the construction of some components for electronic devices [6]. One method of preparing 3D samples is the compaction of powder obtained by mechanical milling or mechanical alloying [8]. In this paper, we demonstrate the fitting of the anhysteretic curve to experimental data for compacted Ni80Fe15Mo5 (wt%), based on the Jiles-Atherton model with an additional parameter. The influence of annealing on the parameters of the Jiles-Atherton model is also examined.
Experimental
Small Ni80Fe15Mo5 (wt%) chips with a size of 2 mm were prepared from the sheet by a rotary drill grinder mounted in a lathe. The chips were milled in a Retsch PM100 planetary ball mill in a steel vial with steel balls for 10 min with a ball-to-powder ratio of 10:1. The morphology of the chips and of the milled chips (powder) was imaged by optical microscopy (Nikon Epiphot 200) and scanning electron microscopy (TESCAN VEGA3), as displayed in Fig. 1. The milled powder was sieved to obtain the size fraction from 100 µm to 300 µm. The powder particles were mechanically smoothed [9] and compacted by a uniaxial pressure of 700 MPa at a temperature of 410 °C with a dwell time of 10 min (sample A). Sample B was prepared in the same way, except that after compaction it was annealed at 1100 °C for 10 hours in a hydrogen atmosphere. The dimensions of the resulting bulk samples were: height 2.9 mm, outer diameter 24 mm, and inner diameter 18 mm. Detailed sample parameters, including density, are given in [9].
Anhysteretic curves
The anhysteretic magnetization curve (also called the "ideal magnetization" [10]) is a concept used extensively in the characterization of magnetic materials. Anhysteretic magnetization is defined as the "thermal equilibrium" curve measured by cooling the sample from the Curie temperature in an incrementally increased direct current (DC) field [11,12]. The most important use of anhysteretic curves is in magnetic recording, where the superposition of a high-frequency field (also called "bias") on the signal is used to overcome the hysteresis of the recording medium [12].
Anhysteretic curves (Fig. 2) were measured by a modified DC hysteresisgraph. An AC field with decreasing amplitude was applied at every measured point along the magnetization curve through a third toroidal winding to obtain the experimental points of the anhysteretic curve [13,14]. In the figure, the x-axis shows the values of the external magnetic field H_ext (produced by the third winding on the ring-shaped sample).
Inner demagnetization factor
Since the samples used for the measurements were prepared by the compaction of powder, each powder element acts as a source of a demagnetizing field and reduces the value of the internal magnetic field H_int:

H_int = H_ext - N_d M,    H_d = -N_d M,    (1)

where H_d is the demagnetization field, N_d is the inner demagnetization factor, and M is the magnetization. The demagnetization factor was determined from the linear part of the anhysteretic curve for H_ext → 0. The demagnetization field in the sample depends on numerous parameters, such as the shape and size of the particles and the porosity [13]. To determine the demagnetization factor one can use the formula [13]

N_d = µ_0 H_ext / B = µ_0 (B/H_ext)^{-1},    (2)

where B is the magnetic induction and B/H_ext is the slope of the linear part of the anhysteretic curve. The values of the demagnetization factor of compacted Ni80Fe15Mo5 (wt%), before annealing and after annealing at 1100 °C, are given in Table I. The value of the demagnetization factor for the annealed sample is significantly lower than that for the non-annealed sample, due to the creation of paths for the magnetic induction between the powder elements.
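As a minimal numerical sketch only, the following Python code estimates N_d from the low-field linear part of a measured anhysteretic curve using the relation reconstructed above (N_d ≈ µ_0 H_ext / B in the high-permeability limit) and then applies the demagnetization correction; the field and induction arrays are placeholder values, not data from this work.

```python
import numpy as np

mu0 = 4e-7 * np.pi  # vacuum permeability (H/m)

# Placeholder low-field points of a measured anhysteretic curve:
# external field H_ext (A/m) and magnetic induction B (T).
H_ext = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
B = np.array([0.005, 0.010, 0.015, 0.020, 0.025])

# Slope of the linear part, B/H_ext, from a least-squares line through the origin.
slope = np.sum(B * H_ext) / np.sum(H_ext**2)

# Reconstructed relation (high-permeability limit): N_d = mu0 * H_ext / B = mu0 / slope.
N_d = mu0 / slope
print(f"inner demagnetization factor N_d = {N_d:.3e}")

# Demagnetization correction: H_int = H_ext - N_d * M, with B = mu0 * (H_int + M).
H_int = (H_ext - N_d * B / mu0) / (1.0 - N_d)
M = B / mu0 - H_int
```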
Jiles-Atherton model
Nowadays, anhysteretic magnetization curves are often processed with the Jiles-Atherton model of magnetic hysteresis [15]. This is one of the most popular magnetic models and is suitable for the design and simulation of electrotechnical and electronic components with soft magnetic cores [16]. The model itself is based on the anhysteretic curves, which are derived using a mean-field approach in which the magnetization of any domain is coupled to the magnetic field H_int and to the bulk magnetization M [15,17].
According to the Jiles-Atherton model [15], the energy E of a domain with magnetic moment m in a ferromagnetic material in the presence of the magnetic field H_int is

E = -µ_0 m · H_E,    H_E = H_int + H_I,    (3)

where µ_0 is the permeability of vacuum, H_E is the effective magnetic field, and H_I is the magnetic field representing the inter-domain coupling. Typically, H_I determines the shape of the anhysteretic magnetization curve, and according to the Jiles-Atherton model [15] it is

H_I = α M,    (4)

where α is a constant mean-field parameter. Now, (3) can be rewritten as

E = -µ_0 m (H_int + α M).    (5)

According to the Jiles-Atherton model, which is based on the idea of the anhysteretic magnetization M of a ferromagnetic material, after applying Maxwell-Boltzmann statistics (for the distribution of the magnetization vectors of the domains) and introducing the modified Langevin function

L(x) = coth(x) - 1/x,    (6)

we can write

M = M_s L(K_1 (H_int + K_2 M)),    (7)

where k_B is the Boltzmann constant, M_s is the saturation magnetization, and K_1, K_2 are parameters that can be fitted according to (7) to the experimental anhysteretic curves of the two compacted Ni80Fe15Mo5 (wt%) samples, before annealing and after annealing at 1100 °C. Parameter K_1 depends on the room temperature T and on the average magnetic moment m of an effective domain; its definition includes the Boltzmann constant k_B as well as the magnetic constant µ_0 (K_1 = µ_0 m / (k_B T)). Parameter K_2 is consistent with the constant mean-field parameter α in (4).
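For illustration, the Python sketch below fits the reconstructed anhysteretic expression (7) to a magnetization curve, solving the implicit equation by damped fixed-point iteration; the data are synthetic placeholders and the parameter values are not those of this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def langevin(x):
    """Modified Langevin function L(x) = coth(x) - 1/x, with the small-x limit handled."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    small = np.abs(x) < 1e-4
    out[~small] = 1.0 / np.tanh(x[~small]) - 1.0 / x[~small]
    out[small] = x[small] / 3.0  # series expansion near zero
    return out

def anhysteretic(H, Ms, K1, K2, n_iter=500):
    """Solve M = Ms * L(K1 * (H + K2 * M)) for each field value by damped fixed-point iteration."""
    M = np.zeros_like(H, dtype=float)
    for _ in range(n_iter):
        M_new = Ms * langevin(K1 * (H + K2 * M))
        if np.max(np.abs(M_new - M)) < 1e-3:
            return M_new
        M = 0.5 * (M + M_new)
    return M

# Synthetic "measured" anhysteretic curve (internal field in A/m, magnetization in A/m).
H_int = np.linspace(0.0, 2000.0, 60)
M_true = anhysteretic(H_int, 5.0e5, 4.0e-3, 1.0e-3)
rng = np.random.default_rng(1)
M_meas = M_true * (1.0 + 0.01 * rng.normal(size=M_true.size))

# Fit Ms, K1, K2; then derive m and alpha assuming K1 = mu0*m/(kB*T) and K2 = alpha.
p0 = (M_meas.max() * 1.1, 1e-3, 1e-3)
(Ms_fit, K1_fit, K2_fit), _ = curve_fit(anhysteretic, H_int, M_meas, p0=p0, maxfev=20000)

mu0, kB, T = 4e-7 * np.pi, 1.380649e-23, 293.0
m_eff, alpha = K1_fit * kB * T / mu0, K2_fit
print(f"Ms={Ms_fit:.3e} A/m, K1={K1_fit:.3e} m/A, K2={K2_fit:.3e}, m={m_eff:.3e} A*m^2, alpha={alpha:.3e}")
```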
The fitted parameters K_1 and K_2 and the calculated parameters m and α can be found in Table I. The values of m are unexpectedly low, which is explained in [18]. The comparison of the measured anhysteretic curves for both samples with the theoretical ones obtained with the m and α parameters is depicted in Fig. 3.
Conclusions
Experimental anhysteretic curves (before annealing and after annealing at 1100 °C) of the compacted Ni80Fe15Mo5 powder sample with particle sizes from 100 µm to 300 µm were compared with the Jiles-Atherton model for ferromagnetic materials. First, however, the parameters of the Langevin function were determined. Annealing causes a significant decrease of the parameter m (the average magnetic moment of an effective domain) and a slight decrease of the parameter α (the mean-field parameter). This is interpreted as a consequence of the creation of paths for the magnetic flux between powder elements in the compacted material.
The decrease of the parameter values corresponds to stronger coupling and denser effective domains after annealing. We can summarize that the presented model matches the experimental data with very good accuracy.
Leadership and Path Characteristics during Walks Are Linked to Dominance Order and Individual Traits in Dogs
Movement interactions and the underlying social structure in groups have relevance across many social-living species. Collective motion of groups could be based on an "egalitarian" decision system, but in practice it is often influenced by underlying social network structures and by individual characteristics. We investigated whether dominance rank and personality traits are linked to leader and follower roles during joint motion of family dogs. We obtained high-resolution spatio-temporal GPS trajectory data (823,148 data points) from six dogs belonging to the same household and their owner during 14 30–40 min unleashed walks. We identified several features of the dogs' paths (e.g., running speed or distance from the owner) which are characteristic of a given dog. A directional correlation analysis quantifies interactions between pairs of dogs that run loops jointly. We found that dogs play the role of the leader about 50–85% of the time, i.e. the leader and follower roles in a given pair are dynamically interchangeable. However, on a longer timescale tendencies to lead differ consistently. The network constructed from these loose leader–follower relations is hierarchical, and the dogs' positions in the network correlate with the age, dominance rank, trainability, controllability, and aggression measures derived from personality questionnaires. We demonstrated the possibility of determining dominance rank and personality traits of an individual based only on its logged movement data. The collective motion of dogs is influenced by underlying social network structures and by characteristics such as personality differences. Our findings could pave the way for automated animal personality and human social interaction measurements.
Introduction
Groups that are not able to coordinate their actions and cannot reach a consensus on important events, such as where to go, will destabilise, and individuals will lose the benefits associated with being part of a group [1,2]. Decision-making usually involves some form of leadership, i.e. 'the initiation of new directions of locomotion by one or more individuals, which are then readily followed by other group members' ( [3] p83).
Several factors may give rise to the emergence of leadership. In some species or populations, leaders are socially dominant individuals (consistent winners of agonistic interactions [4]) and have more power to enforce their will [5]. For example, in rhesus macaques (Macaca mulatta) the decision to move is the result of the actions of dominant and old females [6]. Similarly, dominant beef cows (Bos taurus) have the most influence on where the herd moves. They go where they wish while subordinates either avoid or follow them [7].
Leaders could appear in species or populations without any dominant individuals, or independently from social dominance. Leaders may have the highest physiological need to impose their choice of action [1,3,[8][9][10], or they may possess special information or skill [11,12].
Finally, an individual of a personality type that is more inclined to lead or does not prefer following others may also initiate collective movements [13,14]. For example, leadership is associated with boldness in sticklebacks (Gasterosteus aculeatus) [15,16]. The investigation of the relationship between leadership and personality might reveal which personality types occupy particular positions in the leadership network, and conversely, network metrics could identify potential personality traits.
With this study our aim was to reveal potential links between leadership in collective movements, motion patterns, social dominance, and personality traits in domestic dogs (Canis familiaris). It is often assumed that domestic dogs inherited complex behaviours from their wolf ancestors (Canis lupus). The typical wolf pack is a nuclear or extended family, where the dominant/ breeding male initiates activities associated with foraging and travel [17]. However, family dog groups may consist of several unrelated individuals with multiple potential breeders. In large wolf packs with several breeders, leadership varies among packs, and dominance status has generally no direct bearing on leadership, but breeders tend to lead more often than nonbreeders [18]. Similarly, leadership in Italian free-ranging dogs interchanged between a small number of old and high-ranking habitual leaders. Interestingly, affiliative relationships had more influence on leadership than agonistic interactions [19].
Family dogs are often kept in groups (for instance, 33% of owners in Germany [20] and 26% of owners in Australia [21] have 2 or more dogs), however interactions within freely moving dog groups and their relationship with social dominance are still unexplored. The capacity of dogs to form robust dominance hierarchies is highly debated [22,23]. However, the reason for the inability to detect hierarchies might be due to methodological issues in certain cases, as instead of aggression patterns, submissive behaviours appear to be better indicators of dominance relationships in dogs [24].
To describe what characterises the collective movement of a group of dogs, and to investigate links between leadership, social dominance, personality [25], and characteristics of individual motion trajectories, we collected high-resolution spatio-temporal (1-2 m, 0.2 s) GPS trajectory data from a group of dogs and their owner during everyday walks. Directional choice dynamics and potential leading activity were assessed by quantitative methods inspired by statistical physics [26,27]. Personality and dominance rank of the dogs were measured by questionnaires completed by the owner. Because the capacity to form dominance hierarchies is likely to vary from breed to breed [28], we chose a group that contains multiple individuals of the same breed, the Hungarian Vizsla. The studied group is composed of five Vizslas (with two dam-offspring pairs) and one small-sized, mixed-breed dog.
Characteristics of the paths
A general overview of the GPS-logged trajectories (see Figure 1 and Video S1: our animation showing a 3-minute-long part of a walk) shows that the dogs run away from the owner periodically, then turn back and return to her, in a loop. Figure S5 shows a typical trajectory of dog V1. It can also be seen that they prefer running these loops or a part of them with one or more group members (see details in the Data Analysis). Given that the dogs' speed was significantly higher than that of the owner (1.5-3.7 times), this motion pattern allows dogs to cover a greater distance than the owner while also keeping the group together. We calculated several simple characteristics of the trajectories and performed an analysis concerning the returning events (Table 1 and Text S1).
The preferred running speeds of the dogs, the relative distances covered, and the distances from the owner were unique and consistent characteristics of an individual dog's path, while other characteristics (e.g, distance from dogs) were less consistent and/or distinctive (for details see Text S1).
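Purely as an illustration, the short Python sketch below computes a few such per-walk descriptors (median moving speed, distance covered relative to the owner, and mean distance from the owner) from two synchronised 5 Hz GPS tracks; the function and threshold values are assumptions, not the study's exact definitions (see Text S1 for those).

```python
import numpy as np

def path_characteristics(dog_xy, owner_xy, dt=0.2, moving_speed_min=0.5):
    """
    Simple per-walk descriptors from synchronised (N, 2) GPS tracks in metres, sampled every dt s.
    Illustrative only; the study's full set of measures (returning events, far-from-owner ratio,
    etc.) is defined in Text S1.
    """
    dog_steps = np.linalg.norm(np.diff(dog_xy, axis=0), axis=1)
    owner_steps = np.linalg.norm(np.diff(owner_xy, axis=0), axis=1)
    dog_speed = dog_steps / dt
    moving = dog_speed > moving_speed_min  # ignore samples where the dog is standing still
    return {
        "preferred_speed_mps": float(np.median(dog_speed[moving])),
        "relative_distance_covered": float(dog_steps.sum() / owner_steps.sum()),
        "mean_distance_from_owner_m": float(np.linalg.norm(dog_xy - owner_xy, axis=1).mean()),
    }

# Toy example with synthetic tracks (a real walk would load logged GPS positions instead).
t = np.arange(0, 60, 0.2)
owner_xy = np.column_stack([1.2 * t, np.zeros_like(t)])                   # owner walks at 1.2 m/s
dog_xy = owner_xy + 15 * np.column_stack([np.cos(t / 3), np.sin(t / 3)])  # dog loops around her
print(path_characteristics(dog_xy, owner_xy))
```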
Author Summary
How does a group of family dogs decide the direction of their collective movements? Is there a leader, or is decision-making based on an egalitarian system? Is leadership related to social dominance status? We collected GPS trajectory data from an owner and her six dogs during several walks. We found that dogs adjusted their trajectories to that of the owner, that they periodically run away, then turn back and return to her in a loop. Tracks have unique features characterising individual dogs. Leading roles among the dogs are frequently interchanged, but leadership is consistent on a long timescale. Decisions about running away and turning back to the owner are not based on an egalitarian system; instead, leader dogs exert a disproportionate influence on the movement of the group. Leadership during walks is related to the dominance rank assessed in everyday agonistic situations; thus, the collective motion of a dog group is influenced by the underlying hierarchical social network. Leader/dominant dogs have a unique personality: they are more trainable, controllable, and aggressive, additionally they are older than follower/subordinate dogs. Dogs are an ideal model for understanding human social behaviour. Therefore, we address the possibility of conducting similar studies in humans, e.g. walking with children and detecting interactions between individuals.
Interactions
To extract information about the interactions between group members, we used a directional correlation analysis [26] with a time window to quantify the fast, joint direction changes for all possible pairings of the dogs (Figure 2A; Table S1; for more details see Data Analysis and Text S1).
We detected frequent short-term interactions and leading-tendency differences between dog pairs within the group. The leading and following roles between interacting pairs were often exchanged during walks and between walks. To check the robustness of the interactions, the directional delay times were calculated separately for the first 7 and the second 7 walks for all pairs. A high correlation was found (two-tailed Pearson correlation: r = 0.635, n = 15, p = 0.011), i.e. significant differences in leading tendency were detected over longer timescales. Based on a Gaussian fit to the peak of the relevant distributions (Figure S8, Table S1), we found that dogs play the role of leader in a given pair about 50-85% of the time (57% to 85% when directed leader-follower relationships were found).
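As an illustrative sketch only, the following Python code computes a simplified directional delay between two GPS tracks in the spirit of the analysis in [26]; the published method additionally uses a sliding time window and filtering steps described in Text S1, and the names and parameter values here are assumptions.

```python
import numpy as np

def headings(xy, dt=0.2):
    """Unit direction-of-motion vectors from an (N, 2) position track sampled every dt seconds."""
    v = np.diff(xy, axis=0) / dt
    speed = np.linalg.norm(v, axis=1)
    speed[speed == 0] = np.nan  # undefined heading while standing still
    return v / speed[:, None]

def directional_delay(xy_i, xy_j, max_lag_s=10.0, dt=0.2):
    """
    Delay tau* (in seconds) maximizing the mean dot product between dog i's heading at time t
    and dog j's heading at time t + tau. A positive tau* suggests that i leads j on average.
    """
    hi, hj = headings(xy_i, dt), headings(xy_j, dt)
    max_lag = int(max_lag_s / dt)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = np.empty(lags.size)
    for k, lag in enumerate(lags):
        if lag >= 0:
            a, b = hi[: len(hi) - lag], hj[lag:]
        else:
            a, b = hi[-lag:], hj[: len(hj) + lag]
        corr[k] = np.nanmean(np.sum(a * b, axis=1))
    return lags[int(np.nanargmax(corr))] * dt

# Toy example: dog j simply repeats dog i's path with a 1 s delay.
rng = np.random.default_rng(0)
track_i = np.cumsum(rng.normal(size=(3000, 2)), axis=0)
track_j = np.vstack([track_i[:5], track_i[:-5]])
print(directional_delay(track_i, track_j))  # ~ +1.0: i leads j
```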
Based on the directional delay time values, we created a summarised leadership network (Figure 2B). In this network, each directed link points from the individual that played the role of the leader more often in the given relationship toward the follower. We used this network to calculate the leading tendency, which is the number of followers that can be reached by travelling through directed links.
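A minimal Python sketch of this network measure is given below; the edge list is a made-up example and not the leadership network reported in Figure 2B.

```python
from collections import deque

# Hypothetical summarised leadership network: each directed edge (leader -> followers)
# points from the dog that led more often in a pair to the dog that followed more often.
leadership_network = {
    "V2": ["V1", "V3"],
    "V1": ["V4"],
    "V3": ["V4"],
    "V4": ["V5"],
    "V5": [],
}

def leading_tendency(network, dog):
    """Number of distinct followers reachable from `dog` by travelling along directed links."""
    seen, queue = set(), deque(network.get(dog, []))
    while queue:
        follower = queue.popleft()
        if follower not in seen:
            seen.add(follower)
            queue.extend(network.get(follower, []))
    return len(seen)

for dog in leadership_network:
    print(dog, leading_tendency(leadership_network, dog))
# V2 -> 4, V1 -> 2, V3 -> 2, V4 -> 1, V5 -> 0 for this made-up example
```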
We also calculated the 'active connections' of each dog, i.e. the number of interactions it has (the number of edges the dog is connected with in the network).
Figure 2 caption (fragments): (B) ...is not connected to any Vizslas, and so is not part of the network. This network is used to calculate leading tendency, which is the number of followers that can be reached travelling through directed links. (C) Dominance network between the dogs derived from the dominance questionnaire [29]. Each directed edge points from the dominant individual toward the subordinate one. The colours represent the context in which dominance is evident: red: barking, orange: licking the mouth, green: eating, and blue: fighting (see more details in Text S1). The nodes were arranged in the vertical direction in such a way that more edges point downwards than upwards between all pairs. doi:10.1371/journal.pcbi.1003446.g002
Relationships between the trajectory variables, leading tendency, dominance ranks, and personality traits
Correlations between trajectory-based variables, leading tendency, personality traits (Jones, 2008, Table 2), and dominance rank (Pongrácz et al., 2008, Table 2) were calculated using two-tailed Pearson correlation for the Vizslas only (n = 5) (Figure 3) and also for all subjects (n = 6). We tested our data for normality using a Shapiro-Wilk test (p < 0.05), and where a significant deviation from a normal distribution was found, we used Spearman correlations (indicated as r_S).
Our main aim was to investigate whether the leadership we defined based on the motion patterns had any connection with social dominance. We found that the leading tendencies calculated from the GPS data correlated significantly with the dominance ranks obtained from the dominance questionnaire [29] (r = 0.92, n = 5, p = 0.026). To support this result, we performed a comparison with a randomisation using all possible permutations, and this correlation value proved to be significantly higher than it was for the randomised cases. For more details see Text S1 and Figures SI11-13.
To find more correlations in our dataset of trajectory variables and personality traits, all 300 possible pairings were analysed. Note that due to the large number of variable pairs and the small number of dogs involved in the study, none of the p-values remain significant after correction for multiple comparisons (Bonferroni, Sidak or Benjamini-Hochberg procedure). But the correlations mentioned here were all significantly higher than the corresponding values of the randomly permuted cases.
The distance from other dogs correlated with the fear of dogs facet (r S = 0.92, n = 5, p = 0.028) and the excitability facet (r S = 0.92, n = 5, p = 0.026). Dogs that, according to the owner, avoid other dogs and seek constant activity maintained a longer distance from their group mates during the walks.
The time period of the returns (the average time duration between returning events) was found to be inversely correlated with the controllability facet (r = -0.82, n = 6, p = 0.046), and the dominance rank measure (r = -0.84, n = 6, p = 0.036). Dominant dogs who were more responsive to training returned to the owner more often.
The far-from-owner ratio (the time ratio of being relatively far from the owner, for more details see Text S1) correlated negatively with companionability (r = -0.87, n = 6, p = 0.024). Dogs that, according to the owner, seek companionship from people also like staying in the owners' proximity.
The preferred running speed correlated with the general aggression facet of the aggression toward people factor (r = 0.95, n = 5, p = 0.015). More aggressive dogs ran faster during the walks.
In addition to being correlated with dominance rank (mentioned earlier), leading tendency was positively correlated with: age (r = 0.91, n = 5, p = 0.032), responsiveness to training (r S = 0.92, n = 5, p = 0.028), controllability (r = 0.98, n = 5, p = 0.003), and aggression towards people (r = 0.95, n = 5, p = 0.013). These relations indicate that those dogs that have a tendency to take the leading role during walks are more aggressive and dominant, and they are also more controllable by the owner, based on the personality questionnaires ( Figure 3).
Discussion
By analyzing the GPS trajectories of freely moving dogs and their owner during walks, we found significant differences in simple path characteristics of the individual dogs. The preferred running speed of the Vizslas ranged from 1.5 to 4.0 m/s (5.4-14.4 km/h), they covered a 1.8-3.7 times longer distance than the owner during a walk, and the usual distances from the owner ranged from 16 to 20 m. These results might be useful for conservation managers in establishing areas where dog walking is prohibited [30] and may also help in designing parks, as dog walking is a popular method for increasing human physical activity (for a review, see [31]).
A directional correlation analysis [26,27] revealed leader-follower interactions between the group members. We detected a loose but consistent hierarchical leadership structure. Due to the dynamic nature of the pairwise interactions, role reversals did occur during walks, and an individual took the role of the leader in a given pair in about 73% (ranging from 57% to 85%) of its interactions where directed leader-follower relationships were found. This ratio is of similar magnitude to that in wild wolf packs with several breeding individuals, where leaders led for 78% of the recorded time, ranging from 58% to 90% [18]. The role of initiating common actions is also frequently interchanged between guide dogs and the owner [32] and between dogs during play [33]. Over a longer timescale, however, differences in leading tendency remained consistent; thus, decision-making during the collective motion was not based on an egalitarian system in our sample.
Although the existence of an overall dominance hierarchy in dogs is debated [23], and the Vizsla is a "peaceful" breed which, compared to other breeds, rarely fights with conspecifics [34], we detected a dominance hierarchy via a questionnaire assessing agonistic and affiliative situations [29]. We found that dominance rank and leadership were strongly connected. Dogs who tend to win in everyday fighting situations, eat first, bark more or first, and receive more submissive displays from the others have more influence over the decisions made during collective motion.
The correlation between leadership and dominance is consistent with a trend in 'despotic' social mammals [5], but probably not characteristic in wolves with several breeding individuals [18]. In large wolf packs (with 7-23 individuals), breeding individuals lead during travels, independently from dominance status. But this situation is relatively rare, as the typical wolf pack is a nuclear or extended family, where the only breeding male leads the pack during travel [17]. Unlike wolves, the dog is a promiscuous species, and in a group, there is usually no single pair of breeders [22]. In our family dog group, the highest ranking dog (V2) was neutered, which may suggest that both leadership and dominance have little or no relationship with reproductive behaviour in family dogs, consistent with observations in feral dogs in India [35][36][37].
We also investigated the relationship between leadership and personality to reveal which personality types occupy particular positions in the leadership network. We found that leaders/ dominants were more responsive to training, more controllable, and more aggressive than followers/subordinates. Other data also suggest that dominance cuts across different contexts and is correlated with boldness, extraversion, and exploratory tendencies in several taxa [38], and assertiveness in wolves [18], but reported links between personality and leadership are rare [14].
Age was a reliable indicator of leadership and dominance. Several studies have reported a positive correlation between age and dominance [39]. Age-related dominance might be due to greater fighting skills (e.g. [40]) or enhanced possibility of forming alliances with other individuals, among other factors [41]. If rank acquisition is learnt at an early age with regular reassessments of dominance, younger dogs may remain subordinate, long after initial body weight differences have disappeared. In our group, both dams were dominant over their adult offsprings, and each adult Vizsla dominated the juvenile Vizsla, which supports the hypothesis that the acceptance of subordinate status within a dog group is probably mediated by conditioning.
Not only leadership and dominance, but also movement characteristics were related to personality. Fearful and excitable dogs maintained a longer distance from other dogs. More controllable and more dominant dogs returned to the owner more often, while less companionable dogs spent more time far from the owner. Surprisingly, more aggressive dogs ran faster during the walks. As male dogs harvest more game than females in preindustrial societies [42], and experimental evidence on mice suggests that testosterone increases the persistence of food searching in rodents [43], higher speed might be related to testosterone levels. Note, however, that even the most "aggressive" score was relatively low in our sample (2.67 out of the maximum 8).
Social organization and social structure vary among populations [44], and in the case of dogs, they vary among breeds and groups [45], thus group decision-making processes are expected to vary accordingly [46]. The main limitation of our study is the low sample size. Observing other groups and breeds may provide different results. For example, the hierarchical network of sled dogs which work as a team with a lead dog [47] is more robust than that of our sample. It would also be interesting to investigate what happens with the leadership network if the owner runs or rides a bike, and her speed is comparable to the dogs' speed.
To summarise, by using GPS devices we found that the leader and follower roles are dynamically interchanged during walks, but are consistent over a longer timescale. The leader-follower network was hierarchical, and the dogs' positions in the network correlated with dominance order derived from everyday life situations. Leadership also correlated with age and personality traits such as trainability and aggression.
Our findings on the connection between variables extracted from GPS trajectory data, dominance rank, and personality traits could pave the way for automated animal personality and dominance measurements. As dogs are ideal models of human social behaviour [48,49] and social robots [50], the present study may also be applied to measure social interactions in humans, as in the case of parents walking with their children, or humans interacting with robots.
Ethics statement
Non-invasive studies on dogs are currently allowed to be done without any special permission in Hungary by the University Institutional Animal Care and Use Committee (UIACUC, Eötvös Loránd University, Hungary). The currently operating Hungarian law ''1998. évi XXVIII. Törvény'' (the Animal Protection Act) defines experiments on animals in the 9th point of its 3rd paragraph (3. 1/9.). According to the corresponding definition by law, our non-invasive observational study is not considered as an animal experiment. The owners volunteered to participate and gave written consent to the publication of the photos. The subjects are described in Table 2, photos of the subjects are presented in Figure S2, and kinship is depicted in Figure S3.
Procedure

GPS data were collected during 14 daily walking tours, each lasting about 30-40 minutes, between 2 May 2010 and 25 November 2010. We analysed 823,148 data points. The high-resolution GPS devices were attached to the dogs with ordinary harnesses (Figures S1, S2), while the owner carried one device attached to her shoulder. The 5 Hz custom-designed GPS devices had a time resolution of 0.2 s, and previous independent tests with the same devices showed a spatial accuracy of 1-2 m ([4], Text S1). Weighing only 16 g, and with dimensions of 2.5 cm × 4.5 cm, it is reasonable to suppose that the devices did not hinder the dogs' movements.
The group always walked on the same open grassy field, with approximate dimensions of 500 × 1000 m, near Budapest, Hungary (located at 47°25′17″N latitude, 19°8′45″E longitude).
The task of the owner was to walk continuously and at as constant a speed as possible during the walks. The dogs were allowed to walk and run freely, and the owner called the dogs back to herself only when she noticed some kind of danger, which happened on just a few occasions. A graphical summary of the procedure is presented in Figure S1.
Questionnaire surveys
The personality of the dogs was quantified using two questionnaires that were completed by the owner at the end of the GPS measurements.
(1) The Dog Personality Questionnaire (DPQ) [51]. The DPQ was compiled from 1,200 descriptions culled from dog-personality literature, shelter assessments, and dog experts' input. A narrowed list was administered to more than 6,000 participants. Items were evaluated in terms of factor- and facet-loadings, content validity, internal consistency, inter-rater reliability, test-retest reliability, and predictive validity. Convergent criteria favoured five factors, labelled as Fearfulness, Aggression towards People, Activity/Excitability, Responsiveness to Training, and Aggression towards Animals. Narrower facets within each factor were also identified. The DPQ has a 75-item and a 45-item form; we used the latter (Table 2).

(2) The dominance questionnaire [29] is, to our knowledge, the only available questionnaire developed with the aim of assessing dominance. It quantifies agonistic interactions between pairs of dogs. The owner had to answer four questions concerning each dog pair: usually which one barks first when a stranger comes to the house (in a competitive situation, dominant dogs bark more [22]), which dog licks the other's mouth more often (a submissive display, [52]), which one eats first when they get food at the same time and at the same spot (dominant animals have priority access to food, [4]), and which one wins fights (dominant animals are consistent winners, [4]). Dogs could receive 1 point for each question, and we summed up the points of each dog (Figure 2C, Table 2).
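As a simple illustration of this scoring scheme, the following sketch tallies questionnaire points per dog; the dog names and the per-pair answers are hypothetical placeholders, not data from the study.

# Minimal sketch of the pairwise dominance questionnaire scoring described above:
# for every pair, each of the four questions awards one point to the dog named in
# the answer, and points are summed per dog. Names and answers are made up.

answers = {
    ("A", "B"): {"barks_first": "A", "is_licked_more": "A", "eats_first": "B", "wins_fights": "A"},
    ("A", "C"): {"barks_first": "A", "is_licked_more": "C", "eats_first": "A", "wins_fights": "A"},
    ("B", "C"): {"barks_first": "C", "is_licked_more": "B", "eats_first": "C", "wins_fights": "C"},
}

scores = {}
for pair, questions in answers.items():
    for winner in questions.values():
        scores[winner] = scores.get(winner, 0) + 1

print(scores)   # {'A': 6, 'B': 2, 'C': 4}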
Data analysis
To extract information concerning the interactions between group members, we used a directional correlation analysis [26] with a time window to quantify the fast, joint direction changes of pairs. Highly correlated direction changes of pairs are usually found only when two dogs interact by running a part of a loop together. The timescale of the owner's direction changes was much larger than that of the dogs, and, due to the short time window and the typically small time delays, it was not covered in the calculations. Therefore, interactions between the owner and the dogs were not detected with this method. However, we know that the owner was walking on a predetermined route, and clearly led the whole group on a longer time scale (Figure 1, Figure S5 and Video S1).
We calculated directional correlation values for all short trajectory segments that were in a 6 s time window (t_win; in other details the method was identical to [26]), thus isolating short-term effects. We used t_win = 6 s in the study, but the exact choice for the time window size has no substantial effect on the results (Figure S8). A local interaction event was defined to exist when corresponding trajectory segments had a higher correlation value than C_min = 0.95 (Figure S7).
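A minimal computational sketch of this step is given below. It assumes 5 Hz position tracks stored as (N, 2) arrays and scans candidate delays within each window; the function names, the maximum delay considered, and the window stepping are illustrative assumptions rather than the exact implementation used in the study.

import numpy as np

# Sketch of the time-windowed directional correlation: for each 6 s window, find the
# delay that maximises the mean dot product of the two dogs' unit heading vectors,
# and keep the window as an interaction event if that correlation exceeds C_min.

def headings(track, dt=0.2):
    """Unit direction-of-motion vectors from an (N, 2) array of positions sampled every dt seconds."""
    v = np.diff(track, axis=0) / dt
    return v / np.clip(np.linalg.norm(v, axis=1, keepdims=True), 1e-9, None)

def correlation_events(track_i, track_j, dt=0.2, t_win=6.0, max_delay=3.0, c_min=0.95):
    """Return (window start index, best delay in s, correlation) for windows above c_min.
    Positive delay means dog j copies dog i's direction changes with a lag (i leads)."""
    hi, hj = headings(track_i, dt), headings(track_j, dt)
    win = int(round(t_win / dt))
    delays = range(-int(max_delay / dt), int(max_delay / dt) + 1)
    events = []
    for start in range(0, min(len(hi), len(hj)) - win, win):
        best_c, best_tau = -1.0, 0.0
        for d in delays:
            j0 = start + d
            if j0 < 0 or j0 + win > len(hj):
                continue
            c = float(np.mean(np.sum(hi[start:start + win] * hj[j0:j0 + win], axis=1)))
            if c > best_c:
                best_c, best_tau = c, d * dt
        if best_c > c_min:
            events.append((start, best_tau, best_c))
    return events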
To extract leading tendency differences between members of pairs, the temporal directional correlation delay times (t_ij) were determined with the maximal correlation value. Positive t_ij values correspond to leading events when dog i leads dog j, as the direction of motion of i is 'copied' by j delayed in time. For each pair, leading-following events corresponding to different t_ij time delays were summed for each case in a walk, and for all 14 walks measured. For a detailed description of the applied method and a histogram of the found time delays between dog i and dog j, see Figure 2A and Figure S8.
If a clear maximum of the time delay histogram exists, it indicates frequent interaction between a dog pair at and near a well-defined time delay (see detailed description in Text S1 and Figures S8, S9). In many cases it can be seen from the histograms of those dog pairs where interaction was found (Figure 2A shows a typical example) that the leading and following roles (i.e. the sign of the time delay) change dynamically during a walk and also between walks. Significant deviation from zero in the location of the maximum value indicates that the dogs in the current pair have different leading propensities, suggesting a directed leader-follower interaction. The full width at half maximum of the histogram (see Text S1) characterises how stable the leader-follower relationship between a pair is.
We constructed an interaction network based on the detected interactions and leading tendency differences ( Figure 2B, see also Figure S10). An edge (or link) indicates detected interaction between a dog pair. In those pairs where there is a significant difference in leading tendency we defined a directed edge (pointing from the dog who was found to lead more frequently to the one who more often assumes the role of follower).
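The following sketch illustrates how such a directed network can be turned into a leadership rank, taken here (as in the supporting figures) as the number of individuals reachable via directed links; the edge list is a hypothetical example, not the network measured in the study.

from collections import deque

# Sketch: directed leader -> follower edges and the number of dogs reachable from
# each node along directed links, used as a simple leadership rank. Edges are made up.

directed_edges = {
    "V2": {"V1", "V3"},
    "V1": {"V3", "V4"},
    "V3": {"V4"},
    "V4": set(),
}

def leadership_rank(graph, node):
    """Number of distinct individuals reachable from `node` via directed edges (BFS)."""
    seen, queue = set(), deque([node])
    while queue:
        for follower in graph.get(queue.popleft(), ()):
            if follower not in seen and follower != node:
                seen.add(follower)
                queue.append(follower)
    return len(seen)

ranks = {dog: leadership_rank(directed_edges, dog) for dog in directed_edges}
print(ranks)   # {'V2': 3, 'V1': 2, 'V3': 1, 'V4': 0}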
The result of the method using the directed edges of the leadership network to characterise active connections was confirmed in an independent way. From the positional data we determined whether members of a pair spend more time in the close vicinity of each other compared to a randomized case (for more details see Text S1). This vicinity method does not require synchronised movement from interacting pairs. The resulting ''social'' network of the directional correlation and the vicinity method are in high correlation (two-tailed Pearson correlation, r = 0.600, n = 15 (number of possible pairs), p = 0.018).

Supporting Information

Text S1 Supplementary details of the analysis, additional results, and justifications. The supplementary text contains technical details of the data filtering and processing, justification of the variables by showing their uniqueness and consistency, justification of the correlations by an additional permutation test, and justification of all chosen parameters by showing that they have no effect on the final results. (DOC)

Figure S8 We used t_win = 6 s in the study, but the overall shape of the histogram remains unchanged, therefore the exact choice of 6 s for the time window size has no substantial effect on the results. The green histograms show the probability density functions of the bootstrapped sample histogram maxima, with the corresponding vertical axis on the right. The panels are arranged in ascending order of the S.D. of the bootstrapped maxima. This value was used to distinguish between the existence or absence of a significant peak. (A-H) Pairs where significant leader-follower relationships were found are shown with blue. The black dashed curves indicate Gaussian distributions fitted to the [−1 s; 1 s] range around the maximum of the given histogram, for the 6 s long time window. These Gaussian distributions were used to estimate the ratio of leading for each pair. (I-O) Those pairs where no significant connections were found in the absence of a significant peak are shown with red. See details of the decision criteria in Figure S9, and for the effect of this choice on the leadership network, consult Figure S10. (TIF)

Figure S9 Randomisation method for deciding when a histogram does or doesn't have a peak. The black curve shows the cumulative distribution of the S.D. of bootstrapped maxima, for 4000 randomised histograms. We gained the randomised histograms by summing up the directional correlation delay time histograms of randomly selected pairs for each walk. The graph also shows the measured S.D. of the bootstrapped histogram maxima for every pair. Pairs where we detected significant leader-follower relationships are indicated with blue colour, otherwise red colour was used. (TIF)

Figure S10 The effect of the cut-off value for considering histograms to have a significant peak on the leadership network. On the top, the maximal value of the S.D. of the bootstrapped maxima for accepting an interaction is shown. Lower or higher limits result in fewer or more edges in the network, respectively. However, the overall hierarchy remains the same. The numbers next to each node indicate the number of individuals which can be reached via directed links. This value was used as a measure of the leadership rank. The leadership network is shown for lower (A-B) and higher (D-E) thresholds than the limit chosen (C) for use in the main text (Figure 2) and in all further analysis. At the bottom, for each network the Pearson correlation coefficient and the corresponding p-value is shown for the correlation between the leadership ranks and the dominance ranks (based on [29]) for the Vizslas (n = 5). In all cases the correlation is significant. (TIF)

Figure S11 Pearson correlation values between all variables extracted from the trajectory data, and the personality traits of the dogs (measured by questionnaires). Cells that contain correlations with p < 0.05 are in bold. Correlation values are colour-coded according to the corresponding p-values for positive correlation (blue: p < 0.01; cyan: 0.01 < p < 0.05) and for negative correlation (green: p < 0.05). The p-values are shown on Figure S12. An ''x'' indicates cells where correlation calculation is not applicable. Note that the correlations were determined using a small sample size of Vizslas (n = 5), therefore none of the p-values remain significant when correcting against multiple comparisons (Bonferroni, Sidak or Benjamini-Hochberg procedure), because of the large number of possible pairings (n = 300). (TIF)

Table S1 The variables characterising the interactions between pairs of dogs detected via the time-windowed directional correlation function method and the bootstrap method. Note that where t* is positive, the dog in the first column leads more often than the dog in the second column, and vice versa. (DOC)

Video S1 Animation showing a 3 minute long part of a walk by the owner (black triangle) and her dogs (coloured circles), recorded with GPS devices. In the bottom right corner the real time is shown; the video is played at 5 times the real speed. The inset in the top right corner illustrates the total path of the owner during the walk, which started at the origin. The small rectangle shows the area presented on the main plot. On the main plot, for each individual the thick, normal and thin lines show the trajectories of the last 2 s, 5 s and 20 s, respectively. The momentary leader-follower relationships found by the time windowed directional correlation delay method are shown with the kite-shaped highlighting: between the smaller equal-length sides (close to the right angle vertex) is the leader, while the acute angle vertex points towards the follower. (AVI)
Glycyrrhizic Acid-Induced Differentiation Repressed Stemness in Hepatocellular Carcinoma by Targeting c-Jun N-Terminal Kinase 1
Hepatocellular carcinoma (HCC) is one of the most common malignant cancers with poor prognosis and high incidence. Cancer stem cells play a vital role in tumor initiation and malignancy. The degree of differentiation of HCC is closely related to its stemness. Glycyrrhizic acid (GA) plays a critical role in inhibiting the degree of malignancy of HCC. At present, the effect of GA on the differentiation and stemness of HCC has not been reported, and its pharmacological mechanism remains to be elucidated. This study evaluated the effect of GA on the stemness of HCC and investigated its targets through proteomics and chemical biology. Results showed that GA can repress stemness and induce differentiation in HCC in vitro. GEO analysis revealed that cell differentiation and stem cell pluripotency were up-regulated and down-regulated after GA administration, respectively. Virtual screening was used to predict the c-Jun N-terminal kinase 1 (JNK1) as a direct target of GA. Moreover, chemical biology was used to verify the interaction of JNK1 and GA. Experimental data further indicated that JNK1 inhibits stemness and induces differentiation of HCC. GA exerts its function by targeting JNK1. Clinical data analysis from The Cancer Genome Atlas also revealed that JNK1 can aggravate the degree of malignancy of HCC. The results indicated that, by targeting JNK1, GA can inhibit tumor growth through inducing differentiation and repressing stemness. Furthermore, GA enhanced the anti-tumor effects of sorafenib in HCC treatment. These results broadened our insight into the pharmacological mechanism of GA and the importance of JNK1 as a therapeutic target for HCC treatment.
INTRODUCTION
Hepatocellular carcinoma (HCC) is one of the most common malignant cancers worldwide (1). Cancer stem cells (CSCs) play a vital role in tumor progression because of their self-renewal and infinite proliferation properties (2). In experimental models, the existence of CSCs is one of the major factors causing resistance to conventional radiotherapy and chemotherapy (3)(4)(5). HCC exhibits poor differentiation and unlimited proliferative capacity. Mutations occurring in well-differentiated cells can lead to increased numbers of self-renewing cells (6,7). HCC dedifferentiation contributes to malignant progression, which is characterized by a significant change of morphology and loss of hepatic function (8). The induction of HCC differentiation is regarded as a prospective strategy for HCC treatment (9)(10)(11). Numerous studies have elucidated that poor differentiation can lead to high recurrence rates (12,13). Differentiation markers, such as the fetal liver marker AFP, are highly expressed in poorly differentiated HCCs, whereas the hepatocyte lineage differentiation marker HEPPAR1 shows low expression (13)(14)(15). Evaluating the differentiation degree of HCC is vital to the development of therapeutic strategies. These findings suggest that differentiation therapy may be an effective method to treat HCC.
Glycyrrhizic acid (GA) is the main active ingredient in Glycyrrhiza uralensis Fisch, a commonly used traditional Chinese medicine (TCM). GA is widely used as a therapeutic agent to cure chronic liver diseases. In addition, it can exert an anti-tumor effect by repressing angiogenesis (16) and inhibiting metastasis (17). Thus far, the effect of GA on the differentiation and stemness of HCC has not been reported, and its pharmacological mechanism remains to be elucidated.
c-Jun N-terminal kinase 1 (JNK1), a member of the JNK family, plays a vital role in malignant transformation. JNK1 is involved in regulating CSC, and high JNK1 activation is closely associated with poor prognosis in HCC patients (18,19). Blocking the JNK1 signaling cascade with its inhibitor SP600125 can reduce the CSC population and enhance differentiation in glioma (20). A previous study has demonstrated that the JNK1 pathway can sufficiently block the differentiation of leukemia cells (21). Another prior study revealed that differentiation is obviously down-regulated in HCC samples with high JNK1 activation (22). Therefore, strategies to inhibit the levels and activities of JNK1 may be effective for HCC prevention and therapy.
However, the potential target of GA and its mechanisms remain unclear. This study was the first to evaluate the effect of GA on differentiation and stemness in HCC. We clarified that GA can inhibit tumor growth by inducing differentiation and repressing stemness. Moreover, JNK1 was found to be the direct target of GA. JNK1-knockdown-mediated differentiation likewise dramatically inhibits stemness. In conclusion, GA-induced differentiation represses stemness in HCC by targeting JNK1. The results of our study may be used to develop more efficient guidelines to treat HCC.
Cell Culture
The HCC cell lines (HepG2 and PLC/PRF/5) were purchased from KeyGen Biotech (Nanjing, China) and cultured in Dulbecco's modified Eagle's medium and RPMI 1640 medium, respectively. The complete medium was supplemented with 10% fetal bovine serum and 1% penicillin-streptomycin. The cells were maintained at 37 °C in a humidified atmosphere containing 5% CO2.
Cell Viability Assay
Cell viability was assessed by using the MTT method. Cells were seeded in a 96-well plate at a density of 1 × 10⁴ cells/well. Different concentrations of GA (0, 1, 2, 3, 4, and 5 mM; Meilunbao, Dalian, China) were added after 24 h. After 48 h of continuous exposure to GA, 10 µL of MTT (5 mg/mL) was added and incubated for another 4 h at 37 °C. Afterwards, 100 µL of dimethyl sulfoxide (DMSO) was added to dissolve formazan crystals. Cell viability was determined by measuring the optical density at 570 nm with a microplate reader (Multiskan™ FC, Thermo Scientific, Waltham, MA, USA). The experiments above were performed in triplicate.
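As an illustration of how the IC50 values reported below could be obtained from such viability data, the following sketch fits a four-parameter logistic dose-response curve; the concentrations and viabilities are placeholder values, and the choice of model is an assumption, since the exact fitting procedure is not stated.

import numpy as np
from scipy.optimize import curve_fit

# Sketch: fit viability (fraction of untreated control) vs. GA concentration (mM)
# with a four-parameter logistic model and read off the IC50. Data are made up.

conc = np.array([0.01, 1.0, 2.0, 3.0, 4.0, 5.0])           # mM (0 replaced by a small value)
viability = np.array([1.00, 0.92, 0.78, 0.61, 0.51, 0.40])

def four_pl(x, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

params, _ = curve_fit(four_pl, conc, viability, p0=[0.0, 1.0, 3.0, 1.0], maxfev=10000)
print(f"estimated IC50 ≈ {params[2]:.2f} mM")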
Clone Formation Assay
The cells were seeded in 6-well plate at a density of 400 cells per well. For pharmacodynamic test, the cells were continuously maintained in different concentrations of GA (0, 0.5, 1, and 2 mM). For target validation experiment, PLC/PRF/5 cells transfected with siNC or siJNK1 were continuously exposed to 0 or 2 mM GA. Cells were incubated for ∼10 days to form sizeable colonies. The colonies were fixed with 4% paraformaldehyde for 20 min at room temperature and then stained with 0.1% crystal violet to visualize the colonies. The experiments above were performed in triplicate. Photographs were taken by a microscope (Nikon, Japan).
Sphere Formation Assay
The cells were collected and rinsed to remove serum and then dissociated to a single-cell suspension in 3D Tumor Sphere Medium XF (Promocell, Sickingenstr, Heidelberg, Germany). For pharmacodynamic test, cells were continuously incubated with medium containing different concentrations of GA (0, 0.5, 1, and 2 mM). For target validation, PLC/PRF/5 cells transfected with siNC or siJNK1 were continuously exposed to medium containing 0 or 2 mM GA. Cells were subsequently cultured in an ultra-low attachment 24-well plate at a density of 1,000 cells per well for ∼10 days. Images were photographed using a microscope (Nikon, Japan). The experiments above were performed in triplicate.
Bioinformatics Analysis
Data related to HepG2 cells treated with GA were downloaded from the GEO database. The GEO series accession number is GSE67504. Significantly different expression was identified as up-regulated or down-regulated according to the following standard: ANOVA P < 0.05, |fold change| > 1.5. Hierarchical clustering was generated by the R package (pheatmap). The functions of down-regulated and up-regulated genes were analyzed with Metascape and visualized in Cytoscape. Data related to JNK1 in HCC were acquired from The Cancer Genome Atlas (TCGA). Patient samples were classified into either JNK1-high or JNK1-low groups. Gene Set Enrichment Analysis (GSEA) was performed on the basis of JNK1 mRNA expression (23,24).
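A minimal sketch of this filtering step is shown below; the file name, column naming, and the layout of the expression matrix are assumptions made for illustration, not properties of the GSE67504 deposit.

import pandas as pd
from scipy import stats

# Sketch: flag genes as differentially expressed when ANOVA P < 0.05 and
# |fold change| > 1.5 between GA-treated and control samples (genes x samples matrix).

expr = pd.read_csv("GSE67504_expression.csv", index_col=0)   # hypothetical file name
control_cols = [c for c in expr.columns if c.startswith("control")]
ga_cols = [c for c in expr.columns if c.startswith("GA")]

records = []
for gene, row in expr.iterrows():
    ctrl, treat = row[control_cols].astype(float), row[ga_cols].astype(float)
    p = stats.f_oneway(ctrl, treat).pvalue
    fc = treat.mean() / ctrl.mean()
    records.append((gene, p, fc))

res = pd.DataFrame(records, columns=["gene", "p", "fold_change"]).set_index("gene")
degs = res[(res.p < 0.05) & ((res.fold_change > 1.5) | (res.fold_change < 1 / 1.5))]
print(degs.sort_values("p").head())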
Synthesis of GA Probe
Synthesis of GA-yne was performed using the purchased GA as the raw material. The terminal alkyne-containing GA-yne probe was synthesized by linking GA to 2-(3-but-3-ynyl-3H-diazirin-3-yl)-ethanol. The fluorescent group rhodamine-N3 was synthesized in accordance with previously published procedures (25).
Molecular Docking
Molecular docking was performed using Sybyl X1.1 software. The crystal structure of JNK1 was downloaded from the PDB database (PDB code, 2XS0). The 3-D structure of the GA was formed with LigPrep. Docking score was used to screen out the potential target of GA amongst multiple proteins.
Immunofluorescence Assay
GA-yne was added when HepG2 cells seeded in dishes had grown to 70% confluence. After 5 h, UV irradiation (350 nm) was performed for 30 min. The cells were fixed with 4% formaldehyde and then blocked with 5% FBS containing 0.1% Triton X-100. Afterwards, a solution (1 mM CuSO4, 1 mM TCEP, 100 µM rhodamine-N3, and 100 µM TBTA dissolved in PBS) was added to generate the click chemistry reaction. Samples were incubated overnight with primary antibody against JNK1 (1:100; Abcam) at 4 °C and then with secondary antibodies conjugated with Alexa Fluor 488 (Invitrogen, Waltham, MA, USA) for 1 h at room temperature. The images were obtained with a confocal microscope (Nikon, Japan).
Super-Resolution Microscopy
HepG2 cells were seeded in 35 mm dishes (World Precision Instruments, USA) and then grown to 60% confluency. Afterwards, a GA probe was added and incubated for 4 h, followed by UV irradiation (350 nm) for 30 min. The GA probe was coupled with 647-conjugated azide (Thermo Fisher, USA) by click chemistry reaction after cells were fixed and blocked. Subsequently, cells were incubated overnight with primary antibody JNK1 (1:100; Abcam) at 4 °C and then with Cy3B-conjugated goat anti-rabbit secondary antibodies for 1 h at room temperature. Images were captured with a Nikon stochastic optical reconstruction microscope (N-STORM, Nikon, Japan).
Biacore Assay
Biacore assay was carried out using a Biacore 3000 instrument (GE Healthcare, Piscataway, NJ, USA). JNK1 was coupled to CM5 sensor chips activated by 50 mM NHS and 200 mM EDC (at a ratio of 1:1). Afterwards, GA was diluted in a buffer and then injected into JNK1-immobilized CM5 sensor chips at concentrations of 3.125, 6.25, 15.625, 31.25, and 62.5 µM. All signals were adjusted by a reference channel. Results were analyzed by using the BIA evaluation software.
RNA Interference
All siRNAs were transfected using Lipofectamine RNAi MAX following the standard protocol. The PLC/PRF/5 and HepG2 cells were collected after 72 h of the experiments. Negative control siRNA sequence:
Tumor Xenograft
Four-to-five-week-old female BALB/c nu/nu mice were raised in specific pathogen-free (SPF) conditions at Tianjin International Joint Academy of Biomedicine. For the target validation experiment, PLC/PRF/5 cells stably transfected with shNC or shJNK1 were injected subcutaneously into nude mice (2 × 10⁶ cells in 100 µL PBS), which were then randomly divided into four groups (n = 4). When the tumor volume reached ∼50 mm³, the mice were treated by gavage with 100 mg/kg GA daily or with saline as control. For the combination experiment, PLC/PRF/5 cells were injected subcutaneously into nude mice (2 × 10⁶ cells in 100 µL PBS), which were then randomly divided into four groups (n = 4). The mice in the experiment groups were treated by gavage with GA (100 mg/kg daily), sorafenib (10 mg/kg daily), or a combination of GA (100 mg/kg daily) and sorafenib (10 mg/kg daily) when the tumor volume reached ∼50 mm³. Meanwhile, the mice in the control group were treated with the same volume of saline. Body weight and tumor diameter were measured every 3 days. Tumor volumes were evaluated using the following formula: V = length × width² / 2 (26). Finally, all mice were euthanized simultaneously and the tumors were subjected to immunohistochemistry (IHC) staining. All animal experiments were performed under the approved protocols of the Institutional Animal Care and Use Committee.
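The volume formula above is simple enough to express directly; the caliper measurements in the example below are illustrative placeholders.

# Sketch of the tumor volume formula used above: V = length × width^2 / 2 (in mm^3).

def tumor_volume_mm3(length_mm, width_mm):
    return length_mm * width_mm ** 2 / 2

print(tumor_volume_mm3(10.0, 6.0))   # 180.0 mm^3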
Statistical Analysis
Statistical analysis was conducted using GraphPad software (version 7, GraphPad Software, Inc., La Jolla, CA, USA). Data were presented as means ± SD. One-way ANOVA was used to compare multiple groups of data. Survival curves were analyzed using the Kaplan-Meier method with the log-rank (Mantel-Cox) test. P < 0.05 was considered statistically significant.
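For illustration, the same two comparisons can be reproduced in open-source tools; the sketch below uses scipy for the one-way ANOVA and the lifelines package for the log-rank test (a substitute for the GraphPad workflow described above), and all numbers are placeholders.

import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test

# Sketch: one-way ANOVA across three treatment groups, and a log-rank comparison of
# survival between hypothetical JNK1-high and JNK1-low patient groups.

groups = [np.array([1.2, 1.1, 1.3]), np.array([0.9, 0.8, 1.0]), np.array([0.5, 0.6, 0.4])]
print("one-way ANOVA p =", stats.f_oneway(*groups).pvalue)

t_high = np.array([12, 20, 25, 31, 40])   # survival in months (hypothetical)
e_high = np.array([1, 1, 1, 0, 1])        # 1 = event observed, 0 = censored
t_low = np.array([30, 45, 50, 60, 72])
e_low = np.array([1, 0, 1, 0, 0])
result = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print("log-rank p =", result.p_value)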
GA Reduces Stemness and Induces the Differentiation of Hepatic Cancer Cells
We first performed MTT on HepG2 and PLC/PRF/5 to detect the effect of GA on cell viability. The cells were then incubated with various concentrations of GA (0, 1, 2, 3, 4, and 5 mM) for 48 h. The half-maximal inhibitory concentrations (IC50) in HepG2 and PLC/PRF/5 cells were 4.045 and 4.075 mM, respectively (Figure 1A). Afterwards, the effects of GA on proliferation were evaluated via colony formation assay. The results showed that GA can inhibit proliferation in a concentration-dependent manner (Figure 1B; one-way ANOVA, *P < 0.05, **P < 0.01, ***P < 0.001). The effects of GA on stemness were then evaluated via sphere formation assay. Results showed that GA can reduce the number and size of the spheres in a dose-dependent manner compared with those of the control group (Figure 1C; one-way ANOVA, *P < 0.05, **P < 0.01, ***P < 0.001). The protein levels of stem cell markers, such as SOX2 and OCT4, dramatically decreased in a dose-dependent manner after GA incubation compared with the unexposed group (Figure 1D; one-way ANOVA; *P < 0.05, **P < 0.01, ***P < 0.001). To explore whether GA can induce the differentiation of HCC, we observed the HepG2 and PLC/PRF/5 cell phenotype and found that GA can lead to marked morphological changes compared with the control (Figure 1E). In addition, the expression of AFP was significantly decreased, whereas that of HEPPAR1 was dramatically increased in a dose-dependent manner (Figure 1F; one-way ANOVA; *P < 0.05, **P < 0.01). Taken together, these data indicated that GA can reduce stemness and induce differentiation in HepG2 and PLC/PRF/5 cells.
Multiple Functions in HCC Are Affected After GA Treatment
Metascape was used to confirm the functional enrichment of multiple genes (27). To investigate the functions affected after GA treatment, we downloaded and analyzed data (HepG2 treated with GA) from the GEO database (GSE67504). The heat map generated from differential genes is shown in Figure 2A. Down-regulated and up-regulated genes are marked in blue and red, respectively. The function of the differential protein was analyzed using the Metascape database and presented in Cytoscape (Figure 2B). Gene ontology (GO) and KEGG analysis of the up-regulated and down-regulated genes revealed that GA can promote cell maturation, inhibit proliferation, reduce the pluripotency of stem cells and decrease EGFR tyrosine kinase inhibitor resistance (Figures 2C,D). These results demonstrated that GA can result in terminal maturation and loss of self-renewal. The results also implied that GA can suppress the proliferation and stemness of HCC cells while inducing their differentiation.
Synthesis and Target Validation of GA Probe
The GA probe (GA-yne) was synthesized to confirm the GA target. The synthesis of the GA probe is shown in Figure 3A.
The GA probe (GA-yne) was composed of a core unit and a linker unit (28,29). To screen out the potential target of GA, virtual screening was performed and JNK1 was chosen as the GA target according to the docking score (Figure 3B). First, we performed an immunofluorescence assay in HepG2 and then observed the co-localization of the GA probe and JNK1 under confocal microscopy. The result clearly demonstrated that the GA probe (red) and the JNK1 immunofluorescence (green) co-localized, with a Pearson's correlation of 0.960102 (Figure 3C). Furthermore, N-STORM was used to observe the co-localization of the GA probe and JNK1. Figure 3D illustrates that the GA probe and JNK1 are well co-localized, with a Pearson's correlation of 0.918397. A Biacore experiment was carried out to further confirm the interaction of GA and JNK1. The dissociation constant (KD) value was 9.68 × 10⁻⁶ M (Figure 3E). Finally, the molecular dynamics simulation visualized the combination of GA with JNK1, and the binding sites between GA and JNK1 were ASN-114, ASP-112, GLN-117, and GLU-154 (Figure 3F). These results demonstrated that GA directly targets JNK1 to block the downstream pathway.
GA Reduces Stemness and Induces Differentiation by Targeting JNK1
To investigate the mechanism of GA, we knocked down JNK1 and performed Western blot assay. The results clearly showed that JNK1 expression was down-regulated in HepG2 and PLC/PRF/5 cells transfected with the siJNK1 plasmid compared with siNC (Figure 4A). Both JNK1 knockdown and addition of GA alone clearly inhibited colony formation. When JNK1 was knocked down, GA showed no significant inhibitory effect on colony formation compared with DMSO (Figure 4B; one-way ANOVA, ***P < 0.001). Meanwhile, either adding GA alone or knocking down JNK1 dramatically reduced sphere formation and the expression of CSC markers. Nevertheless, no remarkable difference in the siJNK1 group was observed with or without GA treatment (Figures 4C,D; one-way ANOVA). Similarly, knocking down JNK1 or adding GA alone could reverse the poor differentiation of HCC toward a well-differentiated state, and the expression of differentiation markers showed corresponding changes. However, when JNK1 was knocked down, the degree of differentiation showed no difference in the GA group compared with that in the DMSO group (Figures 4E,F; one-way ANOVA, *P < 0.05, ***P < 0.001). In addition, GA blocked the effect on stemness and differentiation induced by JNK1 over-expression (Figure S1). These results further demonstrated that GA represses stemness and induces differentiation in HCC by targeting JNK1.
JNK1 Can Aggravate the Degree of Malignancy of HCC
Clinical data analysis was conducted to investigate the role of JNK1 in HCC. A representative IHC image downloaded from The Human Protein Atlas database and a statistical analysis illustrated that JNK1 expression is higher in tumors than in normal tissues (Figure 5A; one-way ANOVA, **P < 0.01). The UALCAN database was used to analyze JNK1 expression. Similarly, JNK1 expression was found to be significantly higher in tumors than in normal tissues (Figure 5B; one-way ANOVA, **P < 0.01). JNK1 expression was positively correlated with the clinical stage and AFP level in TCGA (Figures 5C,D; one-way ANOVA, *P < 0.05, ***P < 0.001). Subsequently, we performed survival analysis, and the results indicated that high JNK1 expression predicts poor prognosis (Figure 5E). GO and KEGG enrichment analysis likewise implied that JNK1 can accelerate proliferation (Figure 5F). GO enrichment consisted of biological process (BP), cellular component (CC), and molecular function (MF) terms. As shown in Figure 5F, DNA replication in BP, condensed chromosome in CC, and helicase activity and DNA-dependent ATPase activity in MF were closely associated with proliferation. In KEGG enrichment, the cell cycle was closely correlated to proliferation. Functions related to proliferation are highlighted in red boxes. Taken together, these data revealed that JNK1 can aggravate the degree of malignancy of HCC.
GA Suppresses Tumor Growth and Enhances the Anti-tumor Effect of Sorafenib
To assess the anti-tumor effect of GA, we subcutaneously injected BALB/c nu/nu mice with PLC/PRF/5 cells transfected with shNC or shJNK1. Tumor images, tumor weight (Figure S2) and tumor volumes indicated that the degree of tumor malignancy was significantly decreased in the shJNK1 group or when 100 mg/kg GA alone was administered compared with the shNC group. Nevertheless, JNK1 knockdown cells exhibited no response to GA (Figures 6A,B; one-way ANOVA, ***P < 0.001). These results demonstrated that the anti-tumor effect of GA is JNK1 dependent. Moreover, GA exhibited no influence on the body weight of the shNC or shJNK1 groups compared with the control (Figure 6C). The IHC analysis of SOX2 and OCT4 also indicated that stemness was inhibited after GA treatment or JNK1 knockdown. Meanwhile, AFP and HEPPAR1 expression dramatically decreased and increased, respectively, after GA was administered or when JNK1 was knocked down, transforming poorly differentiated HCC to well-differentiated HCC. Nevertheless, no remarkable difference in stemness and differentiation was observed in the shJNK1 group regardless of GA addition (Figures 6D,E; one-way ANOVA, ***P < 0.001). These results indicated that GA reduces stemness and induces differentiation by targeting JNK1 in vivo.
Sorafenib is a multikinase inhibitor that has shown efficacy against a wide variety of tumors in preclinical models (30). It can exert an antitumor effect through blocking cell proliferation and angiogenesis. Considering the inhibitory effect of GA on the EGFR tyrosine kinase inhibitor resistance of HCC, we hypothesized that combining GA with sorafenib can enhance the antitumor effect of sorafenib. Combination effects were then evaluated in the PLC/PRF/5-bearing BALB/c nu/nu mice. Tumor volumes were significantly smaller in the GA- or sorafenib-treated groups than in the control group. Moreover, GA enhanced the antitumor effect of sorafenib on tumor growth compared with sorafenib alone (Figures 6F,G; one-way ANOVA, ***P < 0.001). Overall, GA could suppress tumor growth and sensitize HCC cells to sorafenib. Moreover, GA exhibited no influence on body weight in the GA, sorafenib, or GA plus sorafenib groups compared with the control (Figure 6H).
DISCUSSION
Our study elucidated that GA can reduce stemness and induce differentiation by targeting JNK1 in vitro and in vivo.
Malignant HCC cells undergo unlimited proliferation and feature dedifferentiation (31). Dedifferentiation is a virtual event during tumorigenesis, which features morphological changes. Differentiation induced by the effective ingredients of TCM is a new trend in the development of differentiation therapy to treat tumors. Drugs can induce differentiation in many tumor cells, suggesting their clinical importance (32,33). Differentiation therapy is a potential method to cure tumors (9)(10)(11). GO analysis indicated that the regulation of cell differentiation is dramatically up-regulated after GA treatment. The major strength of our study is that it is the first to report the capability of GA to induce HCC differentiation.
GA can exert an anti-tumor effect by reducing proliferation (34,35). Consistent with a previous study, GO analysis revealed that regulation of the cell cycle process is dramatically downregulated after GA treatment. Importantly, no studies have reported the effect of GA on the pluripotency of stem cells in HCC. On the basis of the down-regulated GO, the result indicated that the pluripotency of stem cells is dramatically inhibited after GA treatment. Sorafenib, a multi-kinase inhibitor with efficacy against HCC in clinical application, showed drug resistance upon long-term administration. Hagiwara et al. reported that activated JNK and high CD133 expression contributed to poor response to sorafenib in HCC (36, 37). Interestingly, our study indicated that combination with GA can effectively enhance the anti-tumor effect of sorafenib. This study provides new theoretical guidance to improve the anti-tumor effects of sorafenib by its combination with natural drugs.
High JNK1 expression is closely associated with poor prognosis and increases the degree of malignancy of HCC (18,19,38). Similarly, in our study, the data analysis based on TCGA illustrated that JNK1 can aggravate the degree of malignancy of HCC. Activated JNK1 is positively correlated with the maintenance of stemness in gliomas, indicating the potential of JNK1 as a target for eliminating stemness. Furthermore, JNK1 blockade inhibits self-renewal and induces differentiation in gliomas (20). Our findings on the relationship between stemness and differentiation are consistent with those reported by Chen et al. (39).
In summary, our data showed for the first time that GA reduces the properties of CSCs and induces the differentiation of HCC via JNK1. Our study suggested that GA in combination with sorafenib may be an effective strategy for HCC therapy in the future.
DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. This data can be found here: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE67504.
Microfluidic-Assisted Synthesis of Metal-Organic Framework-Alginate Micro-Particles for Sustained Drug Delivery
Drug delivery systems (DDS) are continuously being explored since humans are facing more numerous complicated diseases than ever before. These systems can preserve the drug's functionality and improve its efficacy until the drug is delivered to a specific site within the body. Among the least used materials for this purpose are metal-organic frameworks (MOFs). MOFs possess many properties, including their high surface area and the possibility for the addition of functional surface moieties, that make them ideal drug delivery vehicles. Such properties can be further improved by combining different materials (such as metals or ligands) and utilizing various synthesis techniques. In this work, the microfluidic technique is used to synthesize Zeolitic Imidazolate Framework-67 (ZIF-67) containing cobalt ions as well as its bimetallic variant with cobalt and zinc as ZnZIF-67, to be subsequently loaded with diclofenac sodium and incorporated into sodium alginate beads for sustained drug delivery. This study shows the utilization of a microfluidic approach to synthesize MOF variants. Furthermore, these MOFs were incorporated into a biopolymer (sodium alginate) to produce a reliable DDS which can perform sustained drug release for up to 6 days (for 90% of the full amount released), whereas MOFs without the biopolymer showed sudden release within the first day.
Introduction
The administration and delivery of therapeutic and diagnostic agents to an afflicted site in the body is one of the major challenges in the treatment of various diseases. Conventional methods of drug administration involve various direct administration strategies, such as oral, ocular, transdermal, and intravenous methods of delivering drugs without any provision of a "carrier". These methods are sometimes ineffective as they suffer various limitations, such as loss of drug function and efficacy, decreased selectivity and transport to the target site, and undesirable effects on other untargeted tissues. Drug delivery systems (DDS) can overcome all these limitations. Drug delivery systems (drug carriers) maintain the physiochemical properties of the drug under the biological environment of the body and can provide a controlled sustained release of drug molecules over a prolonged time [1].
Materials

Diclofenac sodium was purchased as tablets from CIPLA, and the tablets were crushed into a powdered form. The MTT assay kit (EZcount MTT Cell Assay Kit) was purchased from HiMedia Laboratories Pvt. Ltd., Bangalore, Karnataka, India. Dulbecco's Modified Eagle Medium (DMEM), dimethyl sulfoxide (DMSO), and phosphate-buffered saline (PBS) buffer were purchased from Sigma Aldrich (India) Pvt. Ltd., Bangalore, Karnataka, India. MCF-7 breast cancer cells were acquired from the Department of Biotechnology, JAIN (Deemed-to-be-University), Karnataka, India.
Microfluidic Device Fabrication
The PDMS microfluidic synthesis chip (as shown in Figure 1) was fabricated using the standard photolithographic technique with a channel width and depth of 150 and 100 µm, respectively, with the design based on a microfluidic device for fluoride detection previously reported by our group [28]. There are two inlets and one outlet with 1 mm polytetrafluoroethylene (PTFE) tubings inserted into them. The tubings are connected to their respective syringes (or let into a glass beaker for the outlet tubing). The 'S'-shaped part of the chip promotes the vigorous mixing of reactants; the cylindrical region (diameter = 2 mm) present after the mixing of the reactants promotes further mixing and might cause larger, heavier-sized particles (>1-2 µm) to settle down due to loss of fluid velocity and also promote the hydrodynamic focussing of homogenous particles towards the outlet.
Synthesis of ZIF-67 and Zn-ZIF-67
Conventional synthesis of ZIF-67 was performed by dissolving 897 mg of Co(NO3)2·6H2O and 1982 mg of 2-MIM in 30 mL of methanol each via sonication. Both solutions were mixed in a beaker and stirred continuously for 24 h. The precipitate was collected, washed with methanol, and dried in a hot air oven overnight. A Y-channel with a serpentine mixing region was used for the microfluidic synthesis of ZIF-67 and Zn-ZIF-67 (MZIF-67 and MZnZIF-67, respectively) (Figure S1). For ZIF-67, 448 mg of Co(NO3)2·6H2O and 991 mg of 2-MIM were dissolved in 30 mL of methanol. The Co(NO3)2·6H2O solution was passed through one inlet and the 2-MIM solution through the other inlet with a flow rate of 80 µL/min. The product solution was collected from the outlet, washed with methanol, and dried overnight. For Zn-ZIF-67, Co(NO3)2·6H2O salt and zinc nitrate salt were taken in the molar ratio of 1:0.4 (582 mg:238 mg); the salts were dissolved in methanol. Similar to the ZIF-67 microfluidic synthesis, salt solution and linker solution were pumped at 80 µL/min (the flow rate was chosen based on synthesis time and yields obtained), and the product was collected at the outlet (as shown in Figure 1). The product was washed multiple times with ethanol and dried overnight [16,18,29]. The concentrations of materials were chosen based on the assumption that there should be a minimum 'build-up' of material in the channel to prevent its failure, while also considering a reasonable yield of material produced. The flow rate was chosen after assessing the probability of chip failure (blockages, leakages) over five 30-min flow trials at 40, 60, 80, and 100 µL/min. It was observed that a flow rate of 80 µL/min produced the lowest number of chip failures for a decent amount of material produced.
Drug Loading
Diclofenac sodium (DS) was chosen as the model drug. It shows a characteristic peak at around 278 nm under UV-visible spectrophotometry. Various drug concentrations, namely 40, 50, 60, 70, 80, 90, and 100 ppm, were prepared by mixing the drug in de-ionized (DI) water.
MOF-Alginate Bead Synthesis
The bead formation method involves the addition of 5 mg of MOF to 2.5 mL of 90 ppm drug solution followed by agitation for 2 h for adsorption. The resultant solution was mixed with 2.5 mL of 10 wt% sodium alginate (SA) solution to prepare a 5% SA solution. This solution was pumped at 100 µL/min to produce small-sized drops which were dropped into a linker solution containing 5 wt% CaCl2 (Figure S2). The beads were kept in the linker solution for 6 h for optimal linkage. The beads were then washed with water and dried in an oven overnight. Drying reduced the beads to approximately 1 mm in diameter and 1 mg in weight; each bead should theoretically contain 25 µg of MOF.
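The per-bead MOF content quoted above follows from a simple mass balance; in the sketch below the bead count is a placeholder chosen to be consistent with the stated figure of about 25 µg of MOF per bead.

# Sketch: 5 mg of MOF distributed over the dried ~1 mg beads produced from one batch.

mof_mass_ug = 5_000           # 5 mg of MOF added to the drug/alginate mixture
n_beads = 200                 # hypothetical number of dried beads from the batch
print(mof_mass_ug / n_beads)  # 25.0 µg of MOF per bead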
Characterization
All the materials (ZIF-67, MZIF-67, and MZnZIF-67) were analysed with various analytical techniques to understand their physical and chemical properties. The surface morphology of ZIF-67, MZIF-67, and MZnZIF-67 was determined using the field-emission scanning electron microscopy (FESEM) technique (HITACHI SU-70). Brunauer-Emmett-Teller (BET) analysis was carried out to determine the multipoint surface area, and a microporous (MP) study was conducted to calculate the total pore volume. An adsorption-desorption study was performed using N2 at liquid nitrogen temperature (−196 °C) on a Belsorp-Max instrument (M/s. Microtarc BEL, Osaka, Japan). During the sample analysis process, all the materials were subjected to a degassing process at 100 °C for 2 h to expel the moisture content present. To study the structural features, powder X-ray diffraction (XRD) patterns for the samples were recorded on an Ultima-IV X-ray diffractometer (M/s. Rigaku Corporation, Tokyo, Japan) with Ni-filtered Cu Kα radiation (λ = 1.5406 Å) with a 2θ scan speed of 2 degrees/min and a scan range of 5 to 80 degrees at 40 kV and 30 mA. A Zetasizer Nano ZS-ZEN3600 was used to perform zeta potential analysis of the samples at pH 7. X-ray photoelectron spectroscopy (XPS) was used to analyse the chemical states of the elements in the sample using a VG Multilab 2000, which was operated at 3.125 meV using Al-Kα as the energy source. The MTT assay was carried out using a microplate reader (Molecular Devices SpectraMax ABS Plus, TCL Asset Group Inc., Concord, Canada).
Adsorption Studies
Adsorption studies were performed by changing the two main reaction parameters, namely concentration and time of contact. The concentration studies were performed by adding the respective MOF powders to solutions with different drug concentrations, namely 40, 50, 60, 70, 80, 90, and 100 ppm, followed by shaking for 2 h. After 2 h, the solutions were centrifuged at 3000 rpm to separate the MOF powders and the supernatant was collected. The time studies were performed by adding MOF powders to 90 ppm solutions which were left for adsorption for up to 10, 20, 30, 40, 50, and 60 min, after which the solutions were centrifuged at 3000 rpm to separate the MOF powders and the supernatant was collected. The supernatant was analysed using a UV-visible spectrometer. The drug loading percentage (C_l) was calculated using the following equation:

C_l (%) = (C_i − C_e) / C_i × 100,

where C_i is the initial concentration and C_e is the drug concentration after adsorption.
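A short numerical sketch of this calculation is given below; the concentrations are placeholders that would be read off a UV-vis calibration at around 278 nm.

# Sketch of the drug-loading calculation defined above.

def loading_percent(c_initial_ppm, c_supernatant_ppm):
    return (c_initial_ppm - c_supernatant_ppm) / c_initial_ppm * 100

print(loading_percent(90.0, 19.8))   # 78.0 %, of the order of the MZIF-67 maximum reported below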
Drug Release Studies
Drug release studies were performed by the addition of MOF powders and beads to phosphate-buffered saline (PBS, pH 7.4) for a time period of 3, 24, 48, 72, 96, 120, or 144 h, after which the supernatant was collected and analysed using a UV-visible spectrometer. The drug release percentage (C_r) was calculated using the following equation:

C_r (%) = C_e / C_i × 100,

where C_i is the initial concentration and C_e is the drug concentration after release.
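Applied to a series of sampling times, the same calculation gives the cumulative release curves plotted later; all values in the sketch are hypothetical placeholders.

import numpy as np

# Sketch: cumulative drug release percentage over the sampling times used above.

time_h = np.array([3, 24, 48, 72, 96, 120, 144])
released_ppm = np.array([5.0, 14.0, 22.0, 28.0, 33.0, 36.0, 38.0])  # hypothetical supernatant readings
loaded_ppm = 42.0                                                   # hypothetical loaded drug content

release_percent = released_ppm / loaded_ppm * 100
for t, r in zip(time_h, release_percent):
    print(f"{t:>4} h : {r:5.1f} %")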
Cytotoxicity Studies
A 48 h MTT assay was conducted with MCF-7 breast cancer cells in a 96-well plate (200 µL). The materials were dispersed (partially dissolved) in a PBS-ethanol mixture with a volumetric ratio of 1:0.05 of PBS to ethanol to enhance the materials' solubility. A 100 µL solution of an equal population (10,000 cells) of MCF-7 cells in DMEM (with the addition of streptomycin to prevent bacterial growth) was added to each well, and each plate (one for 24 h and the other for 48 h) was incubated for 24 h. After 24 h, each well was inspected, control wells were designated, and 10 µL of solution with a known concentration of the materials was added to the remaining wells (the same volume of blank solution was added to the control wells). The endpoint masses of materials in the wells were 0.003 mg, 0.004 mg, 0.005 mg, 0.006 mg, and 0.007 mg for ZIF-67 and 0.0075 mg, 0.01 mg, 0.0125 mg, 0.015 mg, and 0.0175 mg for MZIF-67 and MZnZIF-67. The different concentrations were taken due to the low solubility/dispersibility of conventional ZIF-67 in PBS. After 24 h and 48 h, a 10 µL solution of MTT reagent (concentration of 5 mg/mL) was added to each well and left for 3 h for the cell-reagent interaction to take place [30]. After three hours, DMSO was added to the wells, the absorbance was measured using a microplate reader at 540 nm, and the OD values were noted. The assay could not be performed for the beads as even after the addition of streptomycin the solution in the wells became turbid and produced improper OD values.
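For illustration, the OD readings described above can be converted into percent viability relative to the untreated control wells as sketched below; the OD values are placeholders, and blank subtraction is an assumption since the exact processing is not stated.

import numpy as np

# Sketch: percent viability from MTT optical density (OD) readings at 540 nm.

od_blank = 0.05
od_control = np.array([0.82, 0.85, 0.80])   # untreated wells (hypothetical)
od_treated = np.array([0.55, 0.52, 0.58])   # wells with a given material mass (hypothetical)

viability = (od_treated.mean() - od_blank) / (od_control.mean() - od_blank) * 100
print(f"viability ≈ {viability:.1f} % of control")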
Characterization
The FESEM images of the single crystals of the MOF variants can be seen with their respective size bars in Figure 2. The comparison indicates that the crystal sizes of the batch-produced and the microfluidically synthesized MOF variants have similar dimensions with a total length of 500 nm (with edge lengths of approximately 250 nm). The images show the crystals to have a similar structure as well as a cubic symmetry, indicating the formation of ZIF-67. The XRD plot of the MOF variants is shown in Figure 4. It can be observed from the plot that the peaks of the synthesized materials match the simulated ZIF-67 XRD pattern. A comparison of the major peaks ((110), (211), (222)) of all three variants shows that the batch and microfluidically synthesized variants have peaks lying at the same 2θ values. All the major peaks are singular in nature, indicating the absence of bi-phasic or two different materials, especially in the case of MZnZIF-67. The zeta potential was determined to be −12.8 mV, −14.9 mV, and −14.1 mV for ZIF-67, MZIF-67, and MZnZIF-67, respectively. BET isotherms of the materials are plotted and shown in Figure 5. All materials show a Type-I isotherm, indicating that the materials are microporous in nature, owing to their very high surface area. The pore diameters of the materials have similar dimensions, but the overall surface area of batch-synthesized ZIF-67 is higher than that of the microfluidically synthesized variants (shown in Table S1). The FTIR spectra of all materials are shown in Figure 5d, with the dotted lines marking the peaks. The peak at 431 cm⁻¹ represents the presence of the Co-N bond (and Zn-N in the case of MZnZIF-67), whereas peaks at 760, 1140, and 1438 cm⁻¹ indicate the stretching and bending modes of the imidazole ring. The stretching modes of C-H from the aromatic ring and the aliphatic chain in 2-MIM are also described by peaks at 2948 cm⁻¹ and 2880 cm⁻¹.
X-ray photoelectron spectroscopy (XPS) is a surface analysis technique that describes the elements present as well as their oxidation states and binding information. The XPS results for the alginate-embedded material (Alg_MZnZIF-67) are shown in Figure 6a and Table S2.

Figure 6. The XPS (a) survey spectra of Alg_MZnZIF-67, (b) high-resolution spectra of O1s, (c) high-resolution spectra of C1s, and (d) high-resolution spectra of Ca2p.

Adsorption Studies

Concentration and Time Studies

Concentration-based adsorption studies are generally performed to observe the maximum adsorption a material (in this case MOFs) can reach over a long period. It can be seen from Figure 7a that the MZIF-67 variant shows the highest drug adsorption across all concentrations of the drug, followed by MZnZIF-67 and ZIF-67. All three variants, MZIF-67, MZnZIF-67, and ZIF-67, reach their highest adsorption capacities of 78%, 63%, and 52%, respectively, at higher concentrations of 80 to 100 ppm (later, 100 ppm was chosen to perform time studies).
Drug Release Studies
The release percentage of the drug in PBS over a period of 3 to 144 h (0 to 6 days) for all MOF and MOF in alginate variants is shown in Figure 8. It can be seen that the powder variants (ZIF-67, MZIF-67, and MZnZIF-67) release most of the adsorbed drug within the first 24 h (1 day), whereas the alginate variants (Alg ZIF-67, Alg MZIF-67, and Alg MZnZIF-67) release their drug content slowly over a period of 144 h (6 days). The drug release of different materials starts at different values because they adsorbed different amounts of the drug, as can be seen from Figure 7a.
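As a companion to Figure 8, the Python sketch below shows one common way a cumulative-release curve is assembled from the drug concentration measured in the release medium at each timepoint, correcting for the drug removed with previously withdrawn aliquots (assuming each aliquot is replaced with fresh PBS). The loading, volumes, and concentrations are assumed values for illustration, not the data underlying Figure 8.

# Cumulative drug-release calculation from timed PBS samples (illustrative values only).

def cumulative_release_percent(conc_mg_per_l, release_volume_l, sample_volume_l, loaded_mg):
    """Percent of the loaded drug released at each timepoint.
    conc_mg_per_l: drug concentration measured in the release medium at each sampling.
    Corrects for the drug removed with every previously withdrawn aliquot, assuming the
    withdrawn volume is replaced with fresh PBS so the total volume stays constant."""
    released = []
    withdrawn_mg = 0.0
    for c in conc_mg_per_l:
        mass_in_medium = c * release_volume_l
        released.append(100.0 * (mass_in_medium + withdrawn_mg) / loaded_mg)
        withdrawn_mg += c * sample_volume_l
    return released

if __name__ == "__main__":
    times_h = [3, 24, 72, 144]            # sampling times in hours, mirroring the study window
    conc_mg_l = [0.8, 2.5, 3.6, 4.1]      # hypothetical concentrations measured at each time
    curve = cumulative_release_percent(conc_mg_l, release_volume_l=0.05,
                                       sample_volume_l=0.002, loaded_mg=0.25)
    for t, r in zip(times_h, curve):
        print(f"{t:>4} h: {r:5.1f} % released")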
Cytotoxicity Studies

The results of the MTT assays performed on the materials are shown in Figure 9. As can be seen in Figure 9c, the ZIF-67 variant has an erratic OD value trend, with the comparative cytotoxicity between 24 and 48 h showing a sudden increase from 24 to 48 h. On the other hand, it can be observed from Figure 9a that the microfluidically synthesized variants show a more gradual, non-erratic trend in OD values (discussed further below).
Characterization

The FESEM images (Figure 2) prove that the synthesis of similarly sized crystals of MOFs can be achieved by microfluidic means in a shorter time than by the batch synthesis techniques. The observed size distribution of crystals achieved by both techniques was also comparable, while when embedded in the alginate beads (Figure 3), the distribution of these crystals seems to be in small clusters, owing to the inability of the MOF to be evenly dispersed in the highly viscous alginate (in water) solution. These small clusters are dispersed evenly in the alginate structure, not affecting the overall distribution/dispersion of the MOF crystals in the alginate microstructure. The presence of matching sharp peaks in the XRD plots (Figure 4) indicates the formation of highly/purely crystalline expected materials, which in turn indicates that microfluidic processes can reproduce such materials with ease. The decrease in surface areas (surface parameters calculated using the instrument software), as seen from the BET plots (Figure 5), might be due to changes in surface morphology/structure or the introduction of surface moieties under the microfluidic synthesis conditions and reduced synthesis time [33,34]. A high surface area and large pore volume may help in the loading of a large amount of drug molecules, but the same effect may be produced even at a lower surface area along with the presence of certain surface moieties or morphology. This might be supported by the fact that the zeta potential of the microfluidic variants is more negative than that of the bulk variant. The FTIR plots (Figure 5) also confirm the presence of the expected functional groups and thus further confirm the formation of the expected products. The XPS spectra indicate the presence of the elements and bonds expected in alginate but the absence of any elements and bonds associated with the MOF embedded in the alginate matrix [35]. It can be gathered from the XPS plots that the metal salt (Ca2+, present in the Ca2p state) used as a linker is present in the matrix, whereas the metals used for the MOF (Co2+, Zn2+) are not present, indicating that the MOF is properly embedded into a properly linked matrix. Similarly, Na+ is also not present, which indicates two facts, namely, (i) the alginate matrix has cross-linked properly such that no unreacted sodium alginate is present and (ii) there is no leakage of diclofenac sodium from the MOF onto the surface of the beads.
Adsorption Studies and Drug Release Studies
As seen from Figures 7 and 8 and Table S1, three possible reasons can be offered as to why the microfluidic ZIF variants show higher adsorption than the batch-synthesized variant: (a) the stability of the batch-synthesized ZIF-67 in water was observed to be lower than that of the microfluidic variants, which may lead to a decrease in adsorption over time; (b) the faster microfluidic synthesis might introduce defects and vacant sites, exposing functional groups that may increase the adsorption capacity; and (c) the smaller pore size of microfluidic ZIF-67 suggests that the drug molecule may occupy its pores more effectively than the bigger pores of the batch-synthesized ZIF-67 (as drug molecules may constantly move back and forth from the pore). The oscillatory nature of the drug adsorption values may be due to the constant adsorption and desorption of the drug molecule from the MOF's surface driven by the processes of diffusion and dissolution [36]. This is also supported by the trend seen in Figure 8, where these MOFs release the majority of the drug molecules almost instantly.
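Because the adsorption data are reported (see Conclusions) to follow the Freundlich isotherm and second-order kinetics, a least-squares fit of both model forms is one way such claims are usually checked. The Python sketch below uses scipy.optimize.curve_fit on hypothetical (Ce, qe) and (t, qt) points; only the model equations, not the data or the fitted constants, correspond to the study.

# Fitting the Freundlich isotherm and pseudo-second-order kinetics to illustrative data.
# The data points are hypothetical; only the model forms match those cited in the text.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(ce, kf, n_inv):
    """Freundlich isotherm: q_e = K_F * Ce**(1/n); n_inv is 1/n."""
    return kf * ce ** n_inv

def pseudo_second_order(t, k2, qe):
    """Integrated pseudo-second-order model: q_t = (k2 * qe**2 * t) / (1 + k2 * qe * t)."""
    return (k2 * qe ** 2 * t) / (1.0 + k2 * qe * t)

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g) and kinetic data (t in h, qt in mg/g)
ce = np.array([2.0, 8.0, 19.0, 37.0, 48.0])
qe = np.array([35.0, 60.0, 85.0, 110.0, 120.0])
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 24.0])
qt = np.array([40.0, 62.0, 84.0, 100.0, 110.0, 118.0])

(kf, n_inv), _ = curve_fit(freundlich, ce, qe, p0=(10.0, 0.5))
(k2, qe_fit), _ = curve_fit(pseudo_second_order, t, qt, p0=(0.01, 120.0))

print(f"Freundlich: K_F = {kf:.2f}, 1/n = {n_inv:.2f}")
print(f"Pseudo-second-order: k2 = {k2:.4f} g/(mg*h), qe = {qe_fit:.1f} mg/g")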
The drug release profiles (Figure 8) of the various materials indicate that the MOF powders (specifically ZIF-67 and MZIF-67) show an erratic, non-uniform, or sudden early release, where the release of drug molecules may be caused by the breakdown of the framework in PBS within a short period of time. The MOF-in-alginate beads show a slow, overall uniform release over 6 days (and also a slower initial release compared to their MOF-only counterparts) due to the preservation of the MOF structure in the alginate matrix and the slow breakdown of alginate under physiological conditions. Among all the MOF-in-alginate variants, the Alg-MZnZIF-67 variant shows the most uniform release, which may be because MZnZIF-67 is more stable in PBS than the other variants owing to the incorporation of zinc along with cobalt in the MOF structure [29].
Cytotoxicity Studies
ZIF-67 has a highly toxic effect on the cells even at very low concentrations compared to the microfluidically synthesized materials, and its erratic OD values may reflect its unstable nature in aqueous solutions, as it might be rapidly degrading in solution; this makes it unsuitable as a drug carrier. The assays of MZIF-67 and MZnZIF-67 show the expected (mostly gradual, non-erratic) trend in OD values, which suggests that these materials are stable in aqueous medium for at least up to 48 h. Of the two, MZnZIF-67 shows a more stable trend of increasing cytotoxic effect with increasing amount of material, with MZIF-67 having lower OD values than MZnZIF-67 for the same corresponding amount of material.
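For context on how MTT readouts are typically interpreted, the Python sketch below converts raw optical-density (OD) values into percent viability relative to an untreated control. The OD numbers, doses, and the blank/control handling are assumptions chosen for illustration, not the values or protocol behind Figure 9.

# Converting hypothetical MTT optical-density (OD) readings into percent cell viability.
# Convention assumed: viability = (OD_sample - OD_blank) / (OD_control - OD_blank) * 100.

def viability_percent(od_sample: float, od_control: float, od_blank: float = 0.05) -> float:
    return 100.0 * (od_sample - od_blank) / (od_control - od_blank)

if __name__ == "__main__":
    od_control = 1.20                                 # untreated cells (assumed)
    doses_ug_ml = [10, 50, 100, 200]                  # hypothetical material amounts
    od_readings = {"MZIF-67":   [1.05, 0.90, 0.74, 0.60],
                   "MZnZIF-67": [1.10, 0.98, 0.85, 0.72]}  # hypothetical 48 h ODs
    for material, ods in od_readings.items():
        row = ", ".join(f"{d} ug/mL: {viability_percent(od, od_control):.0f}%"
                        for d, od in zip(doses_ug_ml, ods))
        print(f"{material}: {row}")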
Conclusions
It can be gathered from the aforementioned data that MOFs can be used for drug delivery purposes after appropriate modifications. The results show that the microfluidic method produced homogeneous MOF particles similar to those from batch/bulk processes in a short amount of time, and also improved the particles' adsorption properties. All three synthesized variants follow the Freundlich isotherm, which indicates multi-layered adsorption, and follow second-order kinetics as well. The microfluidically synthesized variants show a higher adsorption capacity, as the synthesis may have added or left some positive functional groups on the surface of the MOF particles (which might also have led to the decrease in surface area; this requires further investigation); this is supported by the fact that many MOF structures adsorb organic molecules via electronic interactions and hydrogen bonding as well as π-π stacking, i.e., diclofenac molecules may interact with positive surface moieties to be adsorbed. Even though such materials are unsuitable for drug delivery purposes on their own, due to the presence of 'toxic' ions such as cobalt and their instability under physiological conditions, they can be improved by combining other metals such as zinc to greatly increase their stability and further reduce toxicity (as can be seen from the MTT assay studies and Figure S3). As the studies were performed on cancer cells and showed some cytotoxic effect towards these cells, this also supports the possibility of using such toxicity in a favourable way: if the particle can be inserted or directed towards an affected area, the release of toxic ions can help kill the invading/infecting moiety (bacteria, fungus, etc.) as well as destroy cancerous tissue while delivering the relevant medication. Subsequently, the incorporation of such materials in a biopolymer (sodium alginate) matrix helps further protect the delivery agent (the MOF, slowing down the framework's breakdown) and the drug, thus making the system biocompatible, safe, and easy to transport. These beads slowly release the drug over a long time of up to 6 days. MOFs possess many physiochemical properties that make them ideal drug delivery agents, but their qualities can be further improved by the utilization of different synthesis techniques and combinations of different metals in the structure, as well as the incorporation of such materials in a biopolymer, as shown in this work.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/bios13070737/s1, Figure S1. Setup for microfluidic MOF synthesis; Figure S2. Schematic of setup for MOF-in-alginate bead synthesis; Figure S3. Stability of microfluidically synthesized MOF variants in PBS over a period of 5 days; Table S1. BET isotherm parameters of different MOFs; Table S2. The binding energies and elemental composition from the XPS of Alg_MZnZIF-67.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Donation after cardiocirculatory death: a call for a moratorium pending full public disclosure and fully informed consent
Many believe that the ethical problems of donation after cardiocirculatory death (DCD) have been "worked out" and that it is unclear why DCD should be resisted. In this paper we will argue that DCD donors may not yet be dead, and therefore that organ donation during DCD may violate the dead donor rule. We first present a description of the process of DCD and the standard ethical rationale for the practice. We then present our concerns with DCD, including the following: irreversibility of absent circulation has not occurred and the many attempts to claim it has have all failed; conflicts of interest at all steps in the DCD process, including the decision to withdraw life support before DCD, are simply unavoidable; potentially harmful premortem interventions to preserve organ utility are not justifiable, even with the help of the principle of double effect; claims that DCD conforms with the intent of the law and current accepted medical standards are misleading and inaccurate; and consensus statements by respected medical groups do not change these arguments due to their low quality including being plagued by conflict of interest. Moreover, some arguments in favor of DCD, while likely true, are "straw-man arguments," such as the great benefit of organ donation. The truth is that honesty and trustworthiness require that we face these problems instead of avoiding them. We believe that DCD is not ethically allowable because it abandons the dead donor rule, has unavoidable conflicts of interests, and implements premortem interventions which can hasten death. These important points have not been, but need to be fully disclosed to the public and incorporated into fully informed consent. These are tall orders, and require open public debate. Until this debate occurs, we call for a moratorium on the practice of DCD.
Introduction
There have been many "consensus" statements addressing the practice of donation after cardiocirculatory death (DCD). In general, these claim that DCD conforms to clear ethical principles, respects the dead donor rule, and is worthy of support [1][2][3][4][5]. Many believe that the ethical problems of DCD have been "worked out" and that it is unclear why DCD should be resisted. The dead donor rule is an "unwritten, uncodified standard that has guided organ procurement in the United States since the late 1960s" [6]. This rule claims that humans must be dead before vital organs can be taken, and is intended to prevent the following: patients being killed by organ retrieval, harm or exploitation of the weak/vulnerable, mistrust of doctors and transplantation, and treating a patient merely as a means to organs [6,7].
We argue for a moratorium on the practice of DCD until full public disclosure and fully informed consent is obtained from potential donors. Specifically, we will argue that DCD donors may not yet be dead, conflicts of interest in the decision to withdraw life support before DCD are unavoidable, potentially harmful premortem interventions during DCD cannot be justified even with the rule of double effect, consensus statements are of low quality and plagued by conflict of interest, and that claims that DCD conforms with the intent of the law and current accepted medical standards are misleading and inaccurate. Moreover, some arguments in favor of DCD, while likely true, are "straw-man arguments," such as the substantial benefit of organ donation.
I The Process of DCD
In general, the current practice of controlled DCD can be summarized as follows [1][2][3][4][5]. First, there is a decision based on the patient's wishes (either directly, or via a substitute decision maker) or best interests (via a guardian surrogate decision maker when the patient's wishes either are not known, or the patient was never competent) to discontinue life support therapy. This is typically made in the situation of severe brain, neuromuscular, or organ dysfunction when the burdens of continued life support are felt to outweigh the benefits of delaying death. Second, the patient/surrogate/guardian is offered the "opportunity" for organ donation after death if indeed death is pronounced by irreversible lack of circulation. Third, after consent, the patient is withdrawn from life support and death is awaited. If the circulation stops within 1-2 hours, the patient is a DCD donor. Fourth, death is declared after 2-10 minutes of absent circulation, the time varying between hospitals and countries, with the mechanism to determine absent circulation varying as well. Fifth, the surgical team, at the 2-10 minute mark, begins surgical harvest of the organs appropriate for donation. Often cannulas will have been inserted in the femoral vessels prior to life support withdrawal to facilitate rapid organ preservation at the 2-10 minute mark. Usually medications such as heparin and phentolamine will have been given to the patient prior to absent circulation to theoretically improve organ preservation. This orchestrated expected death is called "controlled DCD" to contrast with "uncontrolled DCD" which refers to donation after unexpected cardiac arrest with death pronounced after failed attempts at cardiopulmonary resuscitation.
II The Consensus Position
Protocols for DCD have attempted to clarify the justification for this practice with a series of self-evident truths [1][2][3][4][5]. First, the decision to withdraw life support is made before any mention of DCD, and is not influenced by the option of DCD. Indeed, the physicians who discuss withdrawal are not in any way involved in transplantation. Second, fully informed consent for DCD is freely obtained for organ donation after death and for any interventions done premortem. Third, after 2-10 minutes of absent circulation this state is permanent. There are no cases of circulation restarting on its own after this time period, and to try to restart circulation by cardiopulmonary resuscitation is unethical given the patient's wishes to have a do not resuscitate order. For these reasons, the absent circulation is irreversible, and satisfies the legal, ontologic, and common-sense requirement of irreversibility of death. Fourth, premortem interventions are done solely with the intent to improve donated organ function, and if there is any potential to hasten or cause death this effect is both unintended and unavoidable. Finally, declaring death based on permanent absence of circulation at 2-10 minutes conforms with accepted medical standards, and with the intent of the law regarding irreversibility and criteria for death. We will argue that none of these claims can withstand careful scrutiny.
III The Irreversibility of Death
In discussing death it is useful to review the paradigm used to define it: death is an irreversible biological/ontological event. As Bernat has argued, death is an event separating the process of dying (living, while it seems death is near) from the process of disintegration [8][9][10]. Bernat argues that death is a biological univocal ontologic state of an organism, and irreversible ("if the event of death were reversible it would not be death but rather part of the process of dying that interrupted and reversed" [ [8]p37]; "no mortal can return from being dead, any resuscitation or recovery must have been from a state of dying ") [ [8,9]p124 ,8]. Others have argued for the same paradigm, including the President's Commission [11]. Philosophically, death is the irreversible state where there has been loss of the integrative unity of the organism as a whole; the organism is no longer more than the sum of its parts, and irreversibly cannot resist the disintegration entailed by the forces of entropy [9][10][11][12]. The problem is that this accepted conception of death and irreversibility is not compatible with DCD. We aim to clarify the debate in the literature on this point (Table 1).
Ontology and 'construals' of irreversibility
The ordinary sense of the meaning of irreversible is "not capable of being reversed," [13] and it "depends on what physically can or cannot be done" [ [14]p77]. The plain meaning is that "no known intervention could have eliminated it" [ [15]p26]. If a condition is never actually reversed it is permanent, but if a condition never could be reversed it is irreversible [15]. In other words, irreversibility entails permanence; permanence does not entail irreversibility [15]. The consensus on the moral acceptability of DCD argues that a so-called weak 'construal' of 'irreversible' is 'permanent' [1][2][3][4][5]. We argue that 'permanent' is not a construal of 'irreversible' at all; indeed, Bernat agrees when he writes "the weakest construal falls outside the domain of irreversibility altogether, and resides properly within the domain of permanence" [ [9]p125]. Marquis provides an example that makes this commonsense point: Suppose that Joe has a heart attack and his circulatory function stops. Fred, a physician standing next to Joe, refuses to perform CPR on Joe because Joe is a rival... Suppose that CPR would have been successful, but because it was not performed, cessation of Joe's circulatory function was permanent. Was Fred's refusal to act wrong? Not if we understand the irreversible cessation of circulatory function as equivalent to the permanent cessation of circulatory function... On that understanding, Joe was dead as soon as he collapsed, and Fred's failure to perform resuscitation was not wrong, for he had no obligation to resuscitate a corpse [ [15]p26].
Moral decisions and the ethical/legal obligation not to resuscitate
An objection to this scenario would be that in DCD an informed autonomous moral decision has been made not to attempt resuscitation, and this is why permanent is an acceptable 'construal' of 'irreversible' in determining death [1][2][3][4][5]. This initially persuasive argument fails. As Alister Browne writes, this tries "to perform the trick of explaining how a physically reversible state can be described as 'irreversible' by treating irreversibility as a legal/moral concept... In ordinary usage, whether a person is dead or alive depends on what physically can or cannot be done to reverse that state, not on what legally/morally can or cannot be done" [ [14]p77]. Marquis elaborates on three problems with appealing to this normative ethical sense of 'irreversible' [15]. First, 'reversible' is a dispositional property having the capacity to exhibit the occurrent property of 'being reversed' in the appropriate circumstances. Similar dispositional properties are terms like 'fragile', and 'soluble'. So, the question is, does it make sense to say: it would be wrong (ethically) for me to deliberately break your fine china, therefore, your fine china is not fragile; or it would be wrong (ethically) for me to dissolve your gold ring in acid, therefore, your ring is insoluble. Second, the necessary condition of having an obligation to not resuscitate the DCD donor is that the donor patient is alive. In the example above, Fred had an obligation to resuscitate Joe because a "necessary condition of Fred's having an obligation to Joe is that both Fred and Joe exist [are alive]" [ [15]p28]; that is why it was wrong for Fred not to resuscitate Joe. Similarly, in DCD, it would be wrong for a physician to attempt to resuscitate the donor patient. But, this is because death's irreversibility has ethical consequences, the fact that a person is dead is the basis for a change in our obligations, and the donor patient was not yet dead. Third, many patients in exactly the same state would be considered dead or alive depending on whether resuscitation will be attempted; "but death is a state of a body" [ [15]p29]. Truog makes this point when he describes three patients in the identical physiological state who are dead (the DCD donor), alive (CPR is started), and alive (the DCD donor whose parents change their minds and demand resuscitation at 5 minutes) [16]. Marquis offers another example: An individual is in a severe automobile accident and arrives in the ER. You are the ER physician. You judge that the patient's blood loss is so great that the patient will soon die unless she receives a blood transfusion. Her surrogates decline the transfusion because she is a Jehovah's Witness. You respect the refusal and she dies. You would say, of course: 'Her condition was reversible! I wish I could have transfused her!'... you would be wrong to say that... since reversing the patient's condition was not legally or morally permissible, the patient should have been viewed as being in an irreversible condition...[ [15]p29].
Although some may argue that the state is just as good as death, or is very close to death, we cannot say it is death. "To say it is makes a word mean what it does not, and to do that without warning is necessarily to mislead" [ [14]p78].

Table 1 Clarification of the arguments surrounding the interpretation of the 'irreversibility' of death. Each argument that absent circulation is irreversible at 2-10 minutes is paired with the counterargument that it is not irreversible at 2-10 minutes.

Argument: Permanent is a reasonable 'construal' of irreversible.
Counterargument: The ordinary meaning of irreversible is 'not capable of being reversed.' Permanent is not a 'construal' of irreversible at all.

Argument: There is a moral/legal obligation not to resuscitate.
Counterargument: Irreversible is not a moral/legal concept. The obligation to or not to resuscitate is due to the patient being alive. Death is a state of a body, and those in exactly the same state cannot be both dead and alive.

Argument: There is no difference in outcome by waiting for irreversibility.
Counterargument: This admits that permanence is a prognosis of death, not a diagnosis of death. The DCD donor is living (even if he/she may be dying).

Argument: Autoresuscitation does not occur after 65 seconds of absent circulation.
Counterargument: This is based on inadequate data (n = 5), and tries to explain away the Lazarus phenomenon.

Argument: Permanence accords with accepted medical standards and the intent of the law.
Counterargument: This is misleading and inaccurate. It ignores ontologic and moral issues, mischaracterizes accepted medical standards, and the intent of the law was not 'permanence'.

Argument: Brain death is not required to diagnose death.
Counterargument: The intent of the law is that there is only one death per person. DCD donors are not brain dead.
Consequentialism, utility, and prognosis
It is interesting that the proponents of DCD may in some way recognize these flaws in the declaration of death in the donor. Consensus statements simply state that 'permanent' was the 'construal' of 'irreversible' accepted by the panels [1][2][3][4][5]. More recently, it has been admitted that the declaration of death in the donor is "a compromise on biological reality" [ [9] p128], "an approximation" [[8]p41], "inconsequential" [ [9]p129], "a valid proxy... produces no difference in donor outcome" [ [5]p975], and a perfect indicator/surrogate/proxy of irreversible [8][9][10]. In other words, "permanent cessation of circulation constitutes a valid proxy for its irreversible cessation because it quickly and inevitably becomes irreversible and because there is no difference in outcome between using a permanent or irreversible standard" [ [17]p14]. Indeed, it has been argued on a utilitarian basis that, "the good accruing to the organ recipient, the donor patient, and the donor family resulting from organ donation justified overlooking the biological shortcoming because although the difference in the death criteria was real, it was inconsequential' [ [8]p41].
We believe that these arguments may be used to make a case for organ donation in the setting of DCD; however, they cannot be used to argue that the donor is dead. What these arguments show is that the DCD donor has an almost certain prognosis of death, is in the process of dying, but is still living and not yet dead. Moreover, the accuracy of the prognosis depends on a future event (whether resuscitation will be attempted, whether autoresuscitation will occur), and to claim the prognosis is certain relies on backward causation. In philosophical terms, the prognosis is a "soft fact" about the past, not determined until the future events occur, and thus not a "hard fact" at all. It is a "serious logical mistake by conflating a prognosis of imminent death with a diagnosis of death" [ [16] p16]. If certain prognosis of death was equivalent to death, then certain patients that are clearly alive would have to be classified as dead; Truog mentions the quadriplegic ventilator dependent man having ventilator withdrawal, the patient dependent on ECMO due to no underlying cardiac function having withdrawal of ECMO, and the patient dependent on tube feeds having these withdrawn [18]. Is a drowning man dead because no one will swim out to save him? Or is he merely going to die? It is a separate question whether the patient can still be a donor while violating the dead donor rule. There is also the question of whether the donor's prognosis is certain.
Autoresuscitation and the Lazarus phenomenon
The consensus statements that approve of DCD were of poor quality when it comes to evidence based medicine, and based on "expert opinion" [1][2][3][4][5]. It is repeatedly claimed that there has been no case of autoresuscitation occurring more than 65 seconds after loss of circulation [1][2][3][4][5]. It is important to point out that this crucial 'fact' is based on very poor data limited by: small numbers (n = 108) in 5 studies conducted from 1912-1970 of patients aged 9 months to 87 years, poorly described patient selection criteria, no description of whether there was continuous clinical monitoring other than ECG, no arterial line monitoring, many patients who did have resuscitation attempts, and only 5 cases where ECG monitoring was stated to have continued more than 2 minutes after loss of cardiac activity (Table 2) [19][20][21][22][23]. Remarkably, one of these studies reported a 25 year old woman who had asystole for 2.5 minutes followed spontaneously by an atrioventricular rate of 33/ min [21].
Another form of autoresuscitation is called the Lazarus phenomenon, which has been reported in at least 32 cases [24]. These published cases describe return of unassisted spontaneous circulation from a few seconds to 33 minutes after failed CPR, although most were not adequately monitored to determine the exact timing of return of circulation [24]. Nevertheless, some cases have had constant EKG monitoring with constant observation and 6 of these had autoresuscitation at 5-7 minutes after absent circulation and asystole; some have had arterial line monitoring in addition to continuous EKG monitoring and constant observation, with 3 of these having autoresuscitation at 3-5 minutes after absent circulation and asystole; and some have had constant observation and were found to have autoresuscitated 8-10 minutes after absent circulation with asystole (Table 3) [25][26][27][28][29][30][31][32][33][34][35][36][37]. These cases are hypothesized not to be similar to withdrawal of life support as in the setting of DCD, because they all occurred after failed CPR [24]. The argument is that dynamic lung hyperinflation during CPR may decrease venous return and, after stopping ventilation, this will reverse and autoresuscitation occur; or, that there was delayed delivery of resuscitation drugs to the heart that led to delayed autoresuscitation [24]. The problem is that these do not explain the cases.
There are likely two types of Lazarus phenomenon: one occurs after a short interval of 1-2 minutes and may be due to resolution of hyperinflation with associated venous return and drug delivery to the heart; the other type with more delayed autoresuscitation cannot be so explained [37,38]. Hyperinflation resolves within seconds of stopping ventilation, as shown by studies documenting lung derecruitment within seconds of disconnection from the ventilator; drug delivery to the heart during asystole is difficult to explain [39]. It is more likely that the Lazarus phenomenon is underreported, and that after CPR (with monitoring more likely to occur) it is more likely to be detected. The Lazarus phenomenon indicates that autoresuscitation can occur after absent circulation of many minutes.
Accepted medical standards and the intent of the law

Prominent DCD proponents have claimed that determining death is not primarily an ontologic issue (whether the biological state of the organism is dead), nor a moral issue (whether our obligations to the patient are as if they are dead); rather, it is "fundamentally a medical practice issue" [ [17,40]p1762]. Although ignoring ontology and moral issues is at best concerning, we will examine the claim nevertheless. There are two components to this claim. First, when physicians declare death based on cardiocirculatory criteria in non-DCD settings, they declare death at the moment they verify loss of circulation, without a waiting period [2,5]. Second, the intent of the law, the President's Commission in Defining Death, and the Uniform Determination of Death Act, was a permanence standard of loss of circulation, in accordance with the accepted medical standards [4,5]. Both of these claims are misleading and inaccurate.
First, the observation that physicians often do not have a waiting time to verify irreversibility of absent circulation when commonly declaring death "is irrelevant in situations where following the dead donor rule (DDR) requires knowing whether the patient is merely dying or already dead... the accuracy that is required of our assessments depends on the consequences of our assessments being wrong... if one holds that the DDR is an inviolate principle of organ donation, then the difference between 'dying' and 'dead' becomes crucial" [ [16] p16]. Joffe argued: [the lead author of the recent Canadian forum on DCD] writes that in the past "observation and confirmation was not required and the irreversibility of death was not a practical concern, although diagnostic errors were made." Acknowledging that "diagnostic errors were made" shows that what death is has been clear in the past. Outside the context of organ donation, when one was claimed to be dead based on the irreversible loss of circulation, if the patient was subsequently revived (by autoresuscitation or intervention) and clearly alive, one simply had to admit that the pronouncement of death was incorrect. I do not believe that in this situation one would continue to insist that the diagnosis of death was correct, and that somehow the patient was revived from the irreversible state known as death. This shows that death in the past was understood as an irreversible state of dis-integration of the organism. However, in the context of DCD, we are now forced to "enhance the rigour of the determination of death," and would in this situation of revival have to explain somehow that the patient in the irreversible (or "permanent") state 'death' has now somehow become alive [ [37]p4].
Accepted medical practice is to declare death when circulation is lost, and retrospectively know this was correct after a period of time that verifies it is irreversible. If this retrospective ability is taken away, then we believe physicians would be aware that death had not yet occurred when circulation stopped.
Second, the claim that the intent of the law, the President's Commission on Defining Death, and the UDDA was a permanence interpretation of irreversible is not tenable. The opinion that permanence was the intent seems to have evolved over time, with Bernat in earlier writings suggesting that it "may" have been the intent, and even earlier, that it was clearly not the intent [8,9]. In the late 1990s both Bernat and Capron (the Executive Director of the President's Commission) clearly did not think it was the intent. Capron wrote in 1999: The Pittsburgh protocol seems less a challenge to the UDDA than simply a contradiction of it. The failure to attempt to restore circulatory and respiratory functions in these patients prevents lawfully declaring that death has occurred because irreversibility must mean more than simply 'we choose not to reverse, although we might have succeeded'... the actual point in each case at which it becomes impossible to reverse the loss of functions would be unaffected... in other words, 'It's hopeless'-he would be confusing a prognosis for a diagnosis... Thus, replacing 'irreversible cessation of circulatory and respiratory functions' with 'we choose not to reverse' flies in the face of the UDDA's underlying premise [ [41]p132].
Bernat wrote in 1998: The cessation of heartbeat and breathing must be prolonged because their absence must be of sufficient duration for the brain to become diffusely infarcted and for the cessation of heartbeat and breathing conclusively to be irreversible... It takes considerably longer than a few minutes for the brain and other organs to be destroyed from cessation of circulation and lack of oxygen. Moreover, it takes longer than this time for the cessation of heartbeat and breathing to be unequivocally irreversible, a prerequisite for death. As proof of this assertion, if cardiopulmonary resuscitation were performed within a few minutes of cardiorespiratory arrest, it is likely that some of the purportedly 'dead' patients could be successfully resuscitated to spontaneous heartbeat and some intact brain function... The brief absence of heartbeat and breathing is highly predictive of death in this context [10]. Other authors who were part of the President's Commission on Defining Death have also expressed concerns with DCD protocols, suggesting the intent of the law was not a permanence standard [43][44][45]. The legislation in some states, and the case law from the few cases that exist, also make it clear that permanent is not an acceptable interpretation of irreversible [46]. Some have argued that DCD actually is at high risk of breaking the law [47]. It has also been pointed out that "wherever the Commission used the word 'permanent', it was followed by a description of loss of function that cannot recover because of ischemia, damage, destruction, or necrosis [i.e. irreversible damage]. Neither intent nor action not to resuscitate was mentioned as contingencies qualifying" [49]. Rady et al also ask "did the President's Commission intend for 'irreversible' to have different meanings within the uniform determination of death act when determining death by a circulatory standard vs. a neurologic standard?" [ [48]p1498,49] In brain death it is accepted that the lost functions must be irreversible, not merely permanent. In DCD, it apparently is not.
Brain death: two deaths, or one death?
Medicine, law, ethics, the President's Commission in Defining Death, the Law Reform Commission in Canada, and proponents of DCD all agree that there is only one death per person [11,41,42,50]. There is one phenomenon, or state, of death. There happen to be two ways to diagnose this unitary state: the tests to confirm the irreversible loss of all critical clinical functions of the brain including the brainstem, and the tests to confirm the irreversible loss of cardiocirculatory and respiratory functions. The state of brain death, so the argument goes, is the state of death, and irreversibly absent cardiocirculatory functions is just the usual way of determining this state of the brain [11,12,51]. As Capron explains, "The reason for alternative standards for determining death is not that we believe there are two kinds of death. On the contrary, there is one phenomenon that can be viewed through two windows, and the requirement of irreversibility ensures that what is seen through both is the same or virtually the same thing. Disregarding the requisite of irreversibility as it applies to either standard is as destructive to the process of determining death as it would be to ignore the requisite of cessation" [ [41]p133]. This well accepted state of affairs is currently being ignored in the setting of DCD.
At 10 minutes of absent circulation the brain is not destroyed, and if resuscitation were implemented, some will survive with some clinical and critical brain functions [52][53][54][55][56][57]. It is very likely that over 15 minutes of absent circulation is required before one can claim that brain death will be likely [11,44,[52][53][54][55][56][57]. Proponents of DCD have argued that permanent absence of circulation is a perfect surrogate for inevitable brain destruction, again conflating prognosis with diagnosis [5]. They also argue that the law states that death can be determined by brain death tests or circulatory tests, not both, ignoring the clear intent of the law [4,5]. They also point out that after less than a minute of absent circulation some studies show there is loss of electroencephalographic brain activity, confirming lack of critical clinical brain functions [4,5]. This argument is odd at best. Electroencephalographic activity detects only cortical activity, and for this reason has been questioned as an ancillary test to diagnose brain death; it does not rule out subcortical activity, nor critical brainstem functions [58]. If one was serious about being sure clinical brain death was present, relying solely on an EEG would be dangerous. Moreover, a recent study of seven adults having withdrawal of life support found all had surges of electroencephalographic activity when the patients were pulseless on arterial line, motionless, and with asystole or ventricular escape rhythms; these surges "last for a few minutes at maximum, but usually last between 30-180 seconds" [59].
At the bottom of (or beyond) the slippery slope

We believe that not engaging the arguments we have made questioning some aspects of DCD has led to serious problems. Although an appeal to a utilitarian judgment is tempting, given the good consequences (beneficence) from organ donation, we believe there are also negative consequences. Alister Browne points to some "good-looking" moral principles being violated in DCD: never treat others as mere means, never interfere with the liberty of individuals when they are not doing or threatening harm to others, and never keep information concerning matters of public policy from the people in a democracy [14]. He also points out some undesirable effects: the consequences of discovery of the deception, whether it will set a precedent for deception elsewhere, how it will affect the character of those engaged in it, and the effect on democratic procedures and institutions [14]. He urges that we "see the issues clearly and face them squarely, to understand the choices for what they are, and ourselves for who we are" [ [14]p85].
For proponents of DCD, the reason permanent is a surrogate for irreversible is that the team and patient/surrogate have decided that attempts to reverse it (which we know would usually be successful) will not be allowed ethically/legally to occur because there is an agreed-upon DNR order in place. In the setting of uncontrolled DCD this has resulted in the reinstitution of CPR (manual or by machine), or even institution of cardiopulmonary bypass (extracorporeal life support), once death is declared, in order to preserve the organs [60][61][62][63]. So, the result is a full circle: the patient is dead because we can use a weak 'construal' of 'irreversible', and once that is accepted and the patient is declared dead, we can resuscitate them with CPR and extracorporeal life support, the exact interventions that were forgone to allow 'death' and that were claimed to be legally disallowed in order to justify the creative definition of 'irreversible'.
There are other reasons why our concerns with DCD are even greater in this setting of uncontrolled DCD (Table 4). It is noteworthy that in patients who are dying with absent circulation and who have often had CPR for over 1 hour, institution of extracorporeal life support (Extracorporeal-CPR) is associated with good survival and neurological outcome in over 40% of adults and children [64][65][66][67][68][69][70][71][72][73][74]. We agree that "the thin line between life and death, between rescue ECLS and in situ organ perfusion" has been crossed [ [74]p753]. The Institute of Medicine wrote that this practice should be supported [75][76][77][78][79]. To be fair, a recent consensus statement admitted that this practice "retroactively negates" the death diagnosis, and should not be done [5]. It is hard to see how an irreversible state can be retroactively reversed. A member of the President's Commission warned of this in advance: Quite often, the carefully wrought initial protocols give way over time to a more 'pragmatic' approach, ultimately allowing interventions that would not have met the stringent initial conditions... Past experience demonstrates that efforts to evaluate the current protocol must anticipate that its current restrictions are likely to be relaxed significantly, here and elsewhere, once the protocol is endorsed in principle and put into practice. In my own view, approving the protocol on the basis that its current restrictive conditions will continue to provide adequate protections is an exercise in self-delusion [ [43] p229].
Arnold and Youngner also warned of the "quieter strategy of policy creep" [6].

Table 4 Increased concerns with the practice of uncontrolled donation after cardiocirculatory death

Area of concern: The decision to withdraw life support is independent of the DCD decision.
Examples: The decision to stop CPR is not independent of organ donation. As soon as CPR is stopped, it is clear that organ donation procedures will start. The decision to stop CPR is therefore a decision whether to attempt to save the life versus identify the patient as a donor.

Area of concern: Informed consent is obtained for DCD.
Examples: Consent is not truly informed. First, a signed donor card is a legally binding and irrevocable decision, but unlikely informed [78,160]. Second, organ preservation is started based on an "opting-out" system, prior to determination of donor status and prior to contacting the family [79]. This "protects rather than infringes the family's prerogative to make decisions [about organ donation]" and "enhances autonomy", allows the family the "opportunity to donate", "preserves family choice", and is an "expression of respect" for the family's choice [75][76][77]. This assumes that the surgical steps taken to preserve organs are "modest", "minimally invasive", and "only slight" [75][76][77][78][79]. These are at best arguable claims.

Area of concern: Absent circulation for 2-10 minutes is permanent, and therefore is diagnostic of death.
Examples: The IOM claims that a "hands off period could be very brief and may even be unnecessary" [75], apparently ignoring the cases of Lazarus phenomenon after stopping failed CPR. In addition, re-starting CPR and/or ECMO clearly reverses the absent circulation, often allows resumed brain activity, and, in the context of ECMO, often allows survival with good neurological outcome [64][65][66][67][68][69][70][71][72][73][74].

Area of concern: Death declaration conforms with accepted medical standards and with the intent of the law.
Examples: The accepted medical standard when using ECMO to rescue a patient during failed CPR is to cool the patient for 24 hours, then slowly re-warm, and then assess prognosis cautiously.
IV Conflicts of Interest, the Withdrawal of Life Support Decision, and the DCD Decision
The decision to withdraw life support cannot be separated from consideration of DCD

Most DCD statements are clear that conflicts of interest shall not influence decisions [1][2][3][4][5]. We believe that this is impossible. First, it is said that the decision to withdraw life support will be independent of the request and decision regarding DCD. The physician discussing withdrawal of life support will be aware of the future option of DCD and will not be able to prevent this from influencing his/her opinion. Knowledge and experience of the great benefit to patients with organ failures from organ transplantation, of several patients in the hospital now or recently with these organ failures who are desperately awaiting an organ, and of the academic and financial prestige to the institution and colleagues from organ transplantation activities are unavoidable. The psychology of decision making is complex, but it is clear that bias need not be consciously intentional, and that unconscious biases are more potent and pervasive [80,81]. In addition, disclosure of conflicts of interest, while morally required, does not improve the situation, and has been shown to worsen the influence of bias on decisions [82]. Further, it will not be possible to ensure public acceptance of DCD without prospective donors themselves being aware of the possibility of DCD when they are critically ill.
Second, the risk of bias due to the availability of DCD cannot be simply acknowledged and stated to be unacceptable, thereby allowing one to pretend it has gone away. A Canadian multicenter study found that of 341 adult patients who were assessed by a physician on at least one occasion to have a probability of ICU survival of < 10%, 99 (29%) survived the ICU [83]. Even for those where this prediction was made on at least three occasions, the actual survival was 27/120 (22.5%). When the physician predicted a chance of survival of < 10%, patients were more likely to have withdrawal of life support, and this prediction more powerfully predicted ICU mortality than illness severity, evolving or resolving organ dysfunction, use of inotropes or vasopressors, age, and prior functional status [83,84]. Other studies have found large variability in the accuracy of prognostication by intensivists [85]; in the thresholds for and rates of withdrawal decisions among intensivists, ICUs, and hospitals [86][87][88][89][90][91][92]; and that this can and does lead to self-fulfilling prophecies in predicting outcomes [93]. In the Canadian multicenter study, 3.6% of patients having withdrawal of mechanical ventilation in anticipation of death were discharged home [84]; and in an international ICU adult study the proportion of hospital survivors that had withdrawal/limitation decisions ranged from 2.4-30.3% [92]. The concern that DCD will unduly bias these subjective decisions about withdrawal of life support and alter outcome has been raised by others [94][95][96]. We suggest that this bias can have major implications for patient prognosis from critical illness.
The decisions cannot be independent of transplant personnel
Third, proponents of DCD claim that those involved in transplantation will not be the ones who discuss DCD and obtain consent from the patient/surrogate. This is at best misleading. It may be true that the transplant surgeons will not be the ones to explain and request consent for DCD. However, it is not accurate to claim that "no physician who has had any association with a proposed transplant recipient that might influence their judgment shall take part in the determination of death" nor that "attending hospital staff caring for the recipient should be different than staff caring for the donor" [ [4] pS9]. The physicians and nurses caring for terminally ill ICU patients, discussing withdrawal of life support, and discussing DCD, are the same ones who care for critically ill potential organ recipients and critically ill postoperative transplanted patients. Whether they care for the exact recipient of their most recent patient's donated organ is irrelevant. They care for both groups of patients and this creates an unavoidable conflict of interest.
Fourth, we believe that conflict of interest matters are actually encouraged in order to improve adoption of DCD. The organ donation breakthrough collaborative (supported by the U.S. Department of Health and Human Services) has been actively encouraged by proponents of DCD, including the Institute of Medicine [75,77]. The conflicts of interest inherent in this program aimed at increasing donation rates are obvious (Table 5) [97]. The so-called "team huddle" that allows early involvement of procurement coordinators with medical teams and critically ill patients bundles "what is in the patient's best interests (i.e. delivery of appropriate medical care) with the procurement coordinator's primary interest (i.e. securing consent to donate...) [ [49] p1075]. Perhaps most telling is that a solution to perceived barriers to consent and conflicts of interest in obtaining consent is to have a trained representative of the organ procurement organization engage in the "impartial" donation discussion [75,97]. These trained requestors can then emphasize the "opportunity" to donate, that doing so "can save the lives of one or more people", and thereby "give back something to the community in return for what we have received from it through life", that donation "does not harm anyone", and it is "a way of passing on the gift of life to others" [98]. The "Spanish Model" of organ donation with high consent rates is based on: in-house intensive care/ anesthesia physicians who are transplant coordinators and participate in treatment of the patient, and who have a basic low pay with incentive bonuses tied to organ retrieval success; comparisons between centers to foster competition; and less withdrawal of life support resulting in therapeutic ventilation and high proportions of brain deaths after brain injury [99][100][101].
Although most consensus statements recognize potential conflicts of interest, we believe they make misleading claims that these conflicts are dealt with and therefore not a concern.
V Premortem Interventions, and the Principle of Double Effect

Premortem interventions are often used to theoretically improve the condition of donated organs in DCD, although there is no evidence that they improve outcomes [102]. These interventions include insertion of cannulas to allow rapid preservative solutions to be infused on declaration of death, administration of heparin to prevent clotting when circulation stops, and administration of phentolamine to improve perfusion to the organs during the process of dying (while still living). There are several problems with these practices, even with consent.
Table 5 Quotations from descriptions of the organ donation breakthrough collaborative [97] that illustrate the inherent conflicts of interest (page numbers of the source in parentheses; some row labels were lost in extraction)

"Conduct regular rounds in high potential ICUs... They are the most likely OPO personnel to identify potential donor cases early; they raise hospital staff awareness..." (p. 44)
"In house coordinators interacted with families as extensions of hospital nursing staff... OPO staff do not 'hover' waiting for organs but do discretely monitor the patient's condition." (pp. 14, 56)
"He is already thinking about organ donation upon the arrival of certain types of patients in the emergency room." (p. 55)
Goal is "yes": "Getting to an informed 'yes' is paramount." (pp. 11, 23)
"Sessions at times 'when staff are hungry'... and bring more than enough food to serve all attendees." (pp. 42, 49)
"Distributes pens, notepads, and mugs." (p. 44)
"Invites physicians, residents, and nurses to baseball games, hockey games, annual dinners and other outings to maintain buy-in, strengthen relationships, and recognize high performance." (pp. 49, 51)
"Visit high-referring ICUs with dinner... sent the physician a box of his favorite cigars." (p. 49)
Business model: "Strategically recruits high profile members of business and civic community to sit on the board of governors... Strategically appoints top officials from high donor potential hospital to its board... Strategically select influential, potentially pro-donation hospital personnel to serve on their boards... they do expect them to be champions for organ donation and accessible to the OPO for immediate as well as longer-term needs for facilitating organ donation." (pp. 20, 37, 45)
"Orient operations towards outcomes rather than processes." (p. 28)
"If you secure doctors of high stature, it will facilitate mid-level doctor support." (p. 47)
"They serve as a 'committee of ears'..."

First, these interventions are, in theory, to benefit the organ recipient, not the donor. It is questionable whether the donor can consent to an intervention that cannot benefit him/her and has a significant risk of harm and of hastening or causing death. It has been argued that these interventions are unlikely to hasten or cause death [3,4]. We believe that this is inaccurate. If heparin were very safe, we would not carefully consider who we give therapeutic heparin to, and would not worry about life-threatening bleeding complications. If phentolamine were very safe, we would not worry about the hypotension induced in patients it is given to. The fact is, these are potentially dangerous medications, with potentially life-threatening complications. Bleeding, hypotension, and anesthesia for inserting cannulas could each make it more likely either that death will ensue on withdrawal of life support or that death will be hastened after withdrawal of life support. These complications can and do occur even in patients without high risk for their development. For example, in adults the risk of major bleeding associated with therapeutic heparin is up to 3%, is higher in the setting of ischemic stroke or after recent surgery or trauma, and still higher with higher doses of heparin [103]. In children, the risk of major bleeding with therapeutic heparin in PICU patients was 24% [104,105]. In DCD, the dose of heparin used is much higher than the therapeutic doses used in these studies. Moreover, if heparin prevents the no-reflow phenomenon in the brain, it may actually prolong the time needed for brain death to occur.

Second, we agree with others that "to summon the protection of the principle of double effect by claiming that since donation benefits the patient and family by complying with their wishes, use of these agents is permissible, is an extreme example of professional sophistry" [106]. In the principle of double effect as applied to heparin administration, four factors must be present: the action (giving heparin) must be intrinsically good (obtaining functional organs); the bad effect (death) may be foreseen but the agent must only intend the good effect; the bad effect must not be a means to the good effect; and the good effect must be proportional to (compensate for, or outweigh) the bad effect [107]. We question whether the bad effect (death) is not intended, whether the bad effect (death) is not a means to the good effect (obtaining functional organs), and whether the good effect (obtaining functional organs) is proportional to the bad effect (death). Weisbard wrote: ... the so-called 'foreseen but unintended' bad consequence in a double effect argument must be genuinely not desired... The presupposition of the conventional double effect argument is that the patient's death is the foreseen, but unintended and undesired, consequence of an effort to relieve pain... I find it simply impossible to take these assurances at face value in the context of a transaction whose entire purpose and reason for being is to bring about the death of the patient in a fashion that produces viable organs for transplantation... the protocol seeks to recharacterize a coherent, carefully worked out chain of events, leading inexorably to a foreseen, desired and planned result-death and utilization of organs for transplant-as a series of isolated links, each to be understood solely as directed to the patient's needs of the moment, each entirely disconnected from all surrounding context, including the very purpose of the exercise...[ [43]p222].
VI Straw-man arguments
We have found that it is common to respond to some of our concerns with so-called "straw-man" arguments. The Oxford Dictionary of Philosophy states that "to argue against a straw man is to interpret someone's position in an unfairly weak way, and so argue against a position that nobody holds, or is likely to hold" [108]. These arguments attack points that are not held by the opponent, and include the claims that those opposed to DCD are simply against organ donation, and that we do not acknowledge that the family is often the one who wants to donate in DCD, that the family deserves the opportunity to donate and have some good come of a terrible event, that organ transplantation saves many lives, and that thousands of people die every year on a transplant waiting list [109,110]. To clarify our position on these arguments: we are not against organ donation; a family that wants to donate in DCD is not truly informed when it consents to donation after death; the family does deserve the opportunity to donate in appropriate circumstances; organ transplantation saves many lives; and people do die on transplant waiting lists. Another claim is that surveys show the public is willing to accept DCD [4,111]. This is simply misleading, given that the public in these surveys is asked whether they agree to organ donation after death, without any explanation of the controversy in the diagnosis of death [112,113]. When the controversy is revealed, surveys do not suggest strong support for the deception [112-114]. We do not believe these arguments have any bearing on whether the donor in current DCD protocols is actually dead. Further education on these straw-man arguments is not necessary. We agree with Zamperetti et al. when they wrote "declaring that the patient's death has already taken place is morally questionable and scientifically untenable... the risk [is] of confusing genuine education and adequate awareness with manipulation of people's opinions..." [ [115]p1674].
There are arguments against DCD that we recognize may also be straw-man arguments. The process of DCD has been suggested to alter good end-of-life care. Some have called this "the forgotten donor" [116]. The recent President's Council was concerned that: families offered DCD may feel pressured to decide in favor, the process may interfere with good palliative care, the family's emotional needs and mourning may suffer ("considering that loved ones must be kept 'out of the surgeons' way' immediately after the patient's heart stops beating") [ [51]p82], and rushing to make a death determination "could make the donor's death seem like a mere formality" [ [51]p86]. We list this as a straw-man argument because it does not influence the debate about irreversibility of death, and it is unclear whether proponents of DCD would disagree with it. The recent cases of infant heart donation after 75 seconds of absent circulation demonstrate the extremes to which DCD protocols have gone [117]; nevertheless, many proponents of DCD have criticized this extreme time interval to declaration of death, making it a position they do not hold [5]. The argument that heart transplantation after DCD demonstrates the reversibility of cardiac death has been dismissed by proponents of DCD as missing the point: it is circulation, and not heart function, that is to be permanently lost [5,10,118,119]. Some consider this sophistry (after all, the debate about autoresuscitation applies to heart function, and the normal cases rely on diagnosis of the loss of heart function), and are unconvinced (circulation is restored in the recipient, likely demonstrating the reversibility of absent circulation in the donor; saying loss of circulation was permanent because the heart was removed, and the heart was removed because loss of circulation was permanent, is circular) [15,120-122]. We do not focus on this debate in this paper.
VII The overwhelming consensus
The concerns with DCD should be debated on their merits without undue deference to prominent consensus statements of expert opinion. Nevertheless, we recognize that several respected groups have published consensus statements that have been said to affirm that DCD as currently done is ethically sound [1-5,75,123]. We do not believe these statements adequately address our concerns.
Consensus statements in general, and those on DCD in particular, are of low quality when assessed by evidence-based medicine standards [124]. The main limitations are in the categories of stakeholder involvement, rigor of development, and editorial independence (Table 6) [125]. Most of the statements involve predominantly transplantation experts and members of the lay public who are either transplant recipients or known to support transplantation; have not systematically reviewed the literature on autoresuscitation or the Lazarus phenomenon; have been based on expert opinion; have had unclear panel selection and consensus-building methods; and have been funded and organized by transplantation organizations. Many explicitly acknowledge that their objective was not to determine whether DCD complies with ethical norms or the dead donor rule [1,4,5,75,123].
There is a large and growing list of authors who have raised many of our concerns with DCD, and who have concluded that DCD donors are not dead [13-16,46,48,49,106,115,116,126-132]. This list includes the recent President's Council: "In truth, there is reason to doubt that the cessation of circulatory and respiratory functions is irreversible, in the strict sense... To call the loss of functions irreversible, it must be the case that the functions could not possibly return, either on their own or with external help... If this [attempted resuscitation] were to occur, the patient would certainly not have been 'resurrected', but instead would have been (according to the cardiopulmonary standard of death) resuscitated, i.e., prevented from dying" [ [51]p83]. They do not settle the issue, writing that, with more research, "assurance that the heart will not restart on its own within the relevant time frame, combined with an informed decision by the patient and family in favor of controlled DCD, may or may not be sufficient as a moral warrant for declaring death..." [ [51]p87]. They do suggest that the family should be informed of "the controversies about irreversibility" and that "there ought to be a broader public discussion and debate about the propriety of controlled DCD" [ [51]p86].
VIII Pediatric considerations
The practice of DCD has been endorsed by the American Academy of Pediatrics, including the concerning suggestion for "timely referral...[that] may start in the emergency department with the admission of a critically injured child... Deliberation with the OPO should occur before or when...'withdrawal-of-care' or 'do-not-resuscitate' options are being discussed" [ [133]p823]. Of further concern, it was later clarified that when referring to "deciding when human beings are dead... it is not the purpose of this policy statement to answer those questions, but to raise awareness of them" [134].
Concerns with DCD are increased in children. First, there are even fewer data regarding autoresuscitation and the Lazarus phenomenon; only 6 children were included in the studies identified by the Institute of Medicine (Table 2) [19-21], and we are aware of only three published cases of (the early form of) the Lazarus phenomenon in children [38,135]. This makes determination of "accepted medical standards" in declaring death problematic. Second, conflicts of interest are even more difficult to ignore in pediatric critical care, where units are multidisciplinary and the same personnel care for potential donors and recipients [136]. Variability in end-of-life decisions among pediatric intensivists [87,137], and the lack of validated prognostic systems for critically ill children, increase the risk of premature withdrawal decisions [138]. Third, it is unclear how to apply informed consent to the DCD setting in children.
In children (particularly under age 14 years), premortem decisions are made using the best interests standard by surrogate decision makers, usually the parents [139,140]. This differs from adults, where decisions are often made by a substitute decision maker based on the previously expressed wishes of the patient (made when he/she was competent) [140]. For the child who has never been competent, the primary concern should be the best interests of the child, based on a complete and truthful assessment of benefits and burdens from the perspective of the child [139-141]. The obligation is not to society or the health care system; rather, it is to the child [140]. The best interests standard "does not fit well" with the process of DCD because there is no benefit to the child [2]. It is unclear how to justify the statement that "every family should be given the opportunity to allow their loved one to become an organ donor" [ [133]p826]. The Society of Critical Care Medicine stated that "an altruistic model argues that organ donation will result in such great benefit to both the family of the deceased child and to the recipient families that the intervention is justified. Currently, there is broad support for organ donation following death in pediatric patients after appropriate informed consent" [ [2]p1829]. However, they also write that "altruistic motives for donation cannot be presumed or inferred for pediatric patients" [ [2]p1829]. Particularly in DCD, we are worried that children may be used as a means to an end, since only others benefit from the donation. The child may experience anxiety, loneliness, and fear of abandonment or isolation due to lack of optimal palliative care as life support is withdrawn in the operating room [141]. The family may be deprived of the ability to perform death rituals and of holding the child as they die, and extended family and friends may be excluded from being with their loved one during their final moments or hours [141]. The effects on the quality of death, and on the subsequent grief of the family and friends, are unclear. These concerns are not merely theoretical. A recent review of pediatric DCD policies found variable and concerning practices (Table 7) [142]. Similar variability in policies has been found in adult DCD protocols [1,123]. The remarkable variation in DCD protocols, including in the timing and techniques used to determine absent circulation, suggests a lack of "accepted medical standards."

Table 6 Consensus statements regarding donation after cardiocirculatory death from prominent medical groups, and some comments
- Organized by transplant organizations. Stated purpose: "To address the increasing experience of DCD and to affirm the ethical propriety of transplanting organs from such donors... [and] to expand the practice of DCD in the continuum of quality end-of-life care." Comments: 1. "By new developments not previously reported, the conference resolved controversy regarding the period of circulatory cessation that determines death and allows administration of pre-recovery pharmacologic agents." These new developments were neither described nor referenced. 2. Claims there are two different, mutually exclusive types of death, such that "the cardiopulmonary criterion may be used when the donor does not fulfill brain death criteria."
- Interdisciplinary panel, 2010 [5], organized by a transplant organization. Stated purpose: "To re-examine the standards for death determination and to analyze the new protocols' compliance with these standards." Comments: 1. Claims that death is "fundamentally a medical practice issue and not primarily a moral or ontological issue" [ [40]p1762]. 2. Claims that permanent cessation of circulation "is the meaning of 'irreversible' in the Uniform Determination of Death Act," and therefore that it "is ethically and legally appropriate to procure organs when permanent cessation of circulation has occurred but before irreversible cessation." Yet the lead authors previously wrote that 'permanent' is not compatible with the intent of the UDDA [41,42].
- Institute of Medicine, 1997 [1], organized by a transplant organization. Stated purpose: "This report examines medical and ethical issues in recovering organs from NHBDs who do not meet the standard of brain death."
- Organizer unclear. Stated purpose: "To comment on the issues of timing of death." Comments: 1. Suggests that a long observation time for certification of all deaths "flies in the face of both logic and the contemporary notion of death certification" [p1872]. However, they did not discuss that, in the context of DCD and unlike in the 'routine' death diagnosis, following the dead donor rule requires knowing whether the patient is merely dying or already dead; diagnostic errors are not allowed, a retrospective diagnosis is not possible, and a long observation time is required. 2. "Did not achieve unanimity regarding the single 'best' observation period for asystole, apnea, and unresponsiveness."
- Canadian Forum, 2006 [4], organized by a transplant organization. Stated purpose: "To inform and guide health care professionals involved in developing programs for DCD... Discussion at the forum was restricted to optimal and safe practice in the field as it pertains to DCD." Comments: 1. Presentations were heard "by experts from international jurisdictions where DCD is currently practiced" [pS2]. Adopted "a weaker interpretation" of irreversible, apparently simply echoing the IOM and SCCM reports. 2. Claims that "there has been speculation that a phenomenon known as autoresuscitation may exist." 3. Claims that "based on animal studies and isolated human case reports electrical function of the brain ceases within 20s after circulatory arrest" [pS8], not acknowledging that this EEG activity reflects only the superficial cortical activity of the cerebral hemispheres and is not adequate to suggest that brain death has occurred [58].
IX The need for informed consent
That DCD violates the dead donor rule has important implications. If the dead donor rule is inviolable, then we must change the practice of organ donation to make it truly consistent with the dead donor rule, accepting the risk that the organs donated would be of lower quality. If the dead donor rule is not inviolable, then informed voluntary consent in terminally ill patients to violate the dead donor rule and allow organ donation as the proximate cause of death is required [6,7,126,127,143]. The only argument for maintaining the status quo would be to point out the good consequences that result, including saving lives by organ transplantation and maintaining trust in the medical/transplantation systems [5,8,9,144]. However, we believe that consequentialist calculations in defining death are irrelevant, given that our concern is the actual state (death) of the patient. We seek to diagnose the univocal state of death, regardless of the consequences. As Nair-Collins has pointed out, "biological reality [...]". We believe that truthful, complete, voluntary, fully informed consent to organ donation is required. This best respects patient autonomy [139,140,147]. Signed donor cards and donor registries do not indicate fully informed consent to organ donation; the information provided to the potential donor on organ procurement organizations' websites is at best incomplete [148]. We agree with Browne that "if the aim is not just to maintain trust, but to do so by being trustworthy, deliberate deception that bypasses transparency and consent is forbidden... The real issue at stake is thus not what the IOM identifies, but whether trustworthiness is a value to be sought" [ [14]p85].
A potential challenge to our call for fully informed consent could be the claim that organs are property and organ donation is governed by gift law. It has been argued that organ donation would thus require only a donation intent, and not informed consent, as there are neither risks nor benefits to a deceased donor from donation, and the decision may occur completely outside the doctor-patient relationship (as in signing a donor card or joining a donor registry) [149-151]. This view is reflected in the current OPTN "proposal to update and clarify language in the DCD model elements," which has changed the wording of "consent" [implying informed consent] to "authorization" [152,153]. We believe this change is neither good policy nor an acceptable answer to our concerns, for several reasons. First, the authorization of, and the intent behind, the "anatomical gift" are conditional on the death of the donor; if the donor is unaware that DCD violates the dead donor rule, it would be hard to argue that the gift was voluntarily authorized. By violating the dead donor rule, DCD would be a form of living donation, and donation of a vital organ a form of physician-assisted death. For this reason, we believe that mere "authorization" would be inadequate, and that DCD should surely require fully informed consent when and if it is allowed at all [154,155]. Second, authorization of the "anatomical gift" outside of the doctor-patient relationship assumes that the diagnosis of death is made objectively by doctors, without (even unconscious) bias in the decision to withdraw life support or determine death. Of note, the OPTN proposal strikes any reference to the decision to withdraw life support being made before evaluating a patient as a DCD candidate, or to the need for confirmatory tests of absent circulation (such as arterial line monitoring) [152]. Third, premortem interventions are not done after death, and they have real harms and benefits that require informed consideration by potential donors. In addition, the end-of-life (prior to death) care provided to the donor is altered and should require informed consent [116]. Fourth, some argue that it is not clear that DCD donors are incapable of an experience of pain or suffering [7,154,156], particularly if circulation is reestablished with CPR or ECMO. Whether raw affective experience from brainstem and subcortical structures [157] is possible at 2-5 minutes after absent circulation is unknown. The United States Centers for Medicare and Medicaid Services has held that "... there must be a minimum standard to assure that when families provide consent, they are providing informed consent... potential donor families receive the information they need to make an informed decision about donation..." [158]. A large group of pediatric intensive care clinicians, in responding to our call for a moratorium on DCD, claimed that a moratorium would deprive parents of the "ability to make a truly 'informed' decision about what we should hold sacred, how one chooses to die" [159]. In view of these statements by proponents of DCD, it is surprising that the OPTN proposal seeks to remove an informed consent requirement for DCD. We agree with others that the dead donor rule that is said to justify the gift-law interpretation of authorization for organ donation after death hides the normative nature of the donation decision, and "disguises moral judgments by pseudoobjective claims [about death]" [7].

Table 7 Concerns with policies on donation after cardiocirculatory death in children's hospitals in the United States, Canada, and Puerto Rico (percentage of protocols) [142]

Death determination
- Pulselessness can be determined by palpation alone (a highly inaccurate method [161,162]): 14%
- No specification of the method to determine pulselessness: 11%
- No specification of the duration of absent circulation until organ harvest: 10%
- Fewer than 5 minutes of absent circulation until organ harvest: 10%

Conflicts of interest
- Transplant personnel are precluded from declaring death: 88%
- Transplant personnel are excluded from premortem donor management: 51%
- Physicians caring for potential organ recipients are excluded from participating in premortem donor management or declaration of donor death: 32%
- If the family raises a question about organ donation, donation after cardiocirculatory death can be discussed with the family prior to a withdrawal of life support decision: 21%

Premortem interventions
- Premortem interventions are prohibited: 3%
- Premortem heparin is used: 55%
- Premortem vasodilator(s) are used: 18%
- Premortem vessel cannulation is used: 36%
- Consent is required for premortem interventions: 75%

Palliative care of donor
- Medication intended to hasten death is precluded: 44%
- Withdrawal of life support occurs only in the operating room: 54%
- Of those having withdrawal of life support in the operating room, the family is allowed to remain until death is declared: 48%
- The family is permitted to view the body after organ removal: 27%

Voluntariness of consent
- The family can withdraw consent at any time: 16%

X Conclusions: a call for a moratorium on DCD pending fully informed consent

We have argued that DCD donors are not dead, and therefore that organ donation during DCD violates the dead donor rule. Our concerns with DCD include the following: irreversibility of absent circulation has not occurred, and the many attempts to claim that it has all fail; conflicts of interest at all steps in the DCD process are simply unavoidable; premortem interventions to preserve organ utility are not justifiable; and consensus statements by respected medical groups do not change these arguments. We believe that honesty requires that we face these problems instead of avoiding them. Until the concerns we describe are seriously considered, full public disclosure occurs, and fully informed consent is obtained from donors, there should be a moratorium on the practice of DCD. We believe that DCD is not ethically allowable because it abandons the dead donor rule, involves unavoidable conflicts of interest, and implements premortem interventions that can hasten death. These important points have not been, but need to be, fully disclosed to the public and incorporated into fully informed consent. These are tall orders, and they require open public debate. Until this debate occurs, we call for a moratorium on the practice of DCD.